Test Report: KVM_Linux 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Tests failed (1/345)

Order  Failed test                                               Duration (s)
349    TestStartStop/group/default-k8s-diff-port/serial/Pause    40.41
TestStartStop/group/default-k8s-diff-port/serial/Pause (40.41s)
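The transcript below shows the check that failed: after `out/minikube-linux-amd64 pause`, the test queries `status --format={{.APIServer}}` and expects the apiserver to report "Paused", but it reports "Stopped". A minimal, self-contained sketch of that kind of post-pause status check (hypothetical helper names, not the actual code in start_stop_delete_test.go) could look like:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // componentStatus runs `minikube status` for one profile and returns the
    // raw state string for the requested component ("Running", "Paused",
    // "Stopped", ...). `status` exits non-zero for states other than Running,
    // so the output can still be meaningful when err != nil.
    func componentStatus(binary, profile, goTemplate string) (string, error) {
        out, err := exec.Command(binary, "status",
            "--format="+goTemplate, "-p", profile, "-n", profile).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        got, _ := componentStatus("out/minikube-linux-amd64",
            "default-k8s-diff-port-948988", "{{.APIServer}}")
        if got != "Paused" {
            fmt.Printf("post-pause apiserver status = %q; want %q\n", got, "Paused")
        }
    }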

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-948988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-948988 --alsologtostderr -v=1: (1.393949759s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988: exit status 2 (15.779342086s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
E1018 12:26:33.118930    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988: exit status 2 (15.830711408s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-948988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25: (2.340900628s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────
────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────
────────┤
	│ stop    │ -p default-k8s-diff-port-948988 --alsologtostderr -v=3                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:24 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ stop    │ -p embed-certs-270191 --alsologtostderr -v=3                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ stop    │ -p newest-cni-661287 --alsologtostderr -v=3                                                                                                                                                                                                    │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-948988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                        │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:26 UTC │
	│ addons  │ enable dashboard -p newest-cni-661287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │                     │
	│ image   │ no-preload-839073 image list --format=json                                                                                                                                                                                                     │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ pause   │ -p no-preload-839073 --alsologtostderr -v=1                                                                                                                                                                                                    │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ unpause │ -p no-preload-839073 --alsologtostderr -v=1                                                                                                                                                                                                    │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ delete  │ -p no-preload-839073                                                                                                                                                                                                                           │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ delete  │ -p no-preload-839073                                                                                                                                                                                                                           │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false                                                                                                                       │ auto-720125                  │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │                     │
	│ image   │ default-k8s-diff-port-948988 image list --format=json                                                                                                                                                                                          │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ pause   │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ image   │ embed-certs-270191 image list --format=json                                                                                                                                                                                                    │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ pause   │ -p embed-certs-270191 --alsologtostderr -v=1                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ unpause │ -p embed-certs-270191 --alsologtostderr -v=1                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ delete  │ -p embed-certs-270191                                                                                                                                                                                                                          │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ delete  │ -p embed-certs-270191                                                                                                                                                                                                                          │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ start   │ -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false                                                                                                      │ kindnet-720125               │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────
────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:26:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:26:39.638929   54024 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:26:39.639215   54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:39.639226   54024 out.go:374] Setting ErrFile to fd 2...
	I1018 12:26:39.639232   54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:39.639463   54024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 12:26:39.639986   54024 out.go:368] Setting JSON to false
	I1018 12:26:39.640948   54024 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4147,"bootTime":1760786253,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:26:39.641036   54024 start.go:141] virtualization: kvm guest
	I1018 12:26:39.642912   54024 out.go:179] * [kindnet-720125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:26:39.644319   54024 notify.go:220] Checking for updates...
	I1018 12:26:39.644359   54024 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:26:39.645575   54024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:26:39.646808   54024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 12:26:39.647991   54024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:39.649134   54024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:26:39.650480   54024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:26:39.652192   54024 config.go:182] Loaded profile config "auto-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652340   54024 config.go:182] Loaded profile config "default-k8s-diff-port-948988": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652479   54024 config.go:182] Loaded profile config "newest-cni-661287": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652597   54024 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:26:39.691700   54024 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 12:26:39.692905   54024 start.go:305] selected driver: kvm2
	I1018 12:26:39.692920   54024 start.go:925] validating driver "kvm2" against <nil>
	I1018 12:26:39.692931   54024 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:26:39.693690   54024 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:26:39.693776   54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:26:39.709001   54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:26:39.709030   54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:26:39.724060   54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:26:39.724111   54024 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:26:39.724397   54024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:26:39.724424   54024 cni.go:84] Creating CNI manager for "kindnet"
	I1018 12:26:39.724429   54024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:26:39.724476   54024 start.go:349] cluster config:
	{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:26:39.724562   54024 iso.go:125] acquiring lock: {Name:mk7b9977f44c882a06d0a932f05bd4c8e4cea871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:26:39.726635   54024 out.go:179] * Starting "kindnet-720125" primary control-plane node in "kindnet-720125" cluster
	I1018 12:26:39.727995   54024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:26:39.728049   54024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1018 12:26:39.728060   54024 cache.go:58] Caching tarball of preloaded images
	I1018 12:26:39.728181   54024 preload.go:233] Found /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1018 12:26:39.728194   54024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 12:26:39.728350   54024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json ...
	I1018 12:26:39.728376   54024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json: {Name:mkf1b74ab9b12d679411e2c6e2e2149cae3e0078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.728580   54024 start.go:360] acquireMachinesLock for kindnet-720125: {Name:mk547bbf69b426adc37163c0f135f5803e3e7ae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:26:39.728617   54024 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "kindnet-720125"
	I1018 12:26:39.728642   54024 start.go:93] Provisioning new machine with config: &{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:26:39.728718   54024 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 12:26:35.461906   52813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.481663654s)
	I1018 12:26:35.461943   52813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 12:26:35.505542   52813 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1018 12:26:35.519942   52813 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1018 12:26:35.544751   52813 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:26:35.561575   52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:26:35.715918   52813 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:26:38.056356   52813 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.34040401s)
	I1018 12:26:38.056485   52813 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:26:38.085796   52813 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:26:38.085832   52813 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:26:38.085846   52813 kubeadm.go:934] updating node { 192.168.72.13 8443 v1.34.1 docker true true} ...
	I1018 12:26:38.085985   52813 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-720125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:26:38.086071   52813 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:26:38.149565   52813 cni.go:84] Creating CNI manager for ""
	I1018 12:26:38.149605   52813 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:26:38.149622   52813 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:26:38.149639   52813 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-720125 NodeName:auto-720125 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:26:38.149863   52813 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-720125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:26:38.149950   52813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:26:38.167666   52813 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:26:38.167750   52813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:26:38.182469   52813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 12:26:38.210498   52813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:26:38.235674   52813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 12:26:38.272656   52813 ssh_runner.go:195] Run: grep 192.168.72.13	control-plane.minikube.internal$ /etc/hosts
	I1018 12:26:38.278428   52813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:26:38.295186   52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:26:38.477493   52813 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:26:38.516693   52813 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125 for IP: 192.168.72.13
	I1018 12:26:38.516721   52813 certs.go:195] generating shared ca certs ...
	I1018 12:26:38.516742   52813 certs.go:227] acquiring lock for ca certs: {Name:mk4e9b668d7f4a08d373c26a5a5beadd4b363eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.516897   52813 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key
	I1018 12:26:38.516956   52813 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key
	I1018 12:26:38.516971   52813 certs.go:257] generating profile certs ...
	I1018 12:26:38.517059   52813 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key
	I1018 12:26:38.517080   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt with IP's: []
	I1018 12:26:38.795006   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt ...
	I1018 12:26:38.795041   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt: {Name:mke50b87cc8afab1bea24439b2b8f8b4fce785c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.795221   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key ...
	I1018 12:26:38.795236   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key: {Name:mk73a13799ed8cba8c6cf5586dd849d9aa3376fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.795369   52813 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319
	I1018 12:26:38.795387   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
	I1018 12:26:39.015985   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 ...
	I1018 12:26:39.016017   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319: {Name:mk48dc89d0bc936861c01af4faa11afa9b99fc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.016173   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 ...
	I1018 12:26:39.016187   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319: {Name:mk06903a8537a759ab5885d9e1ce94cdbffcbf0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.016265   52813 certs.go:382] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt
	I1018 12:26:39.016371   52813 certs.go:386] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key
	I1018 12:26:39.016432   52813 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key
	I1018 12:26:39.016447   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt with IP's: []
	I1018 12:26:39.194387   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt ...
	I1018 12:26:39.194419   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt: {Name:mk9243a20439ab9292d13a3cab98b56367a296c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.194631   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key ...
	I1018 12:26:39.194649   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key: {Name:mk548ef445e4b58857c8694e04881f9da155116e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.194883   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem (1338 bytes)
	W1018 12:26:39.194965   52813 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909_empty.pem, impossibly tiny 0 bytes
	I1018 12:26:39.194982   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:26:39.195016   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:26:39.195051   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:26:39.195083   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/key.pem (1679 bytes)
	I1018 12:26:39.195138   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem (1708 bytes)
	I1018 12:26:39.195753   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:26:39.237771   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:26:39.273475   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:26:39.304754   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:26:39.340590   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 12:26:39.375528   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:26:39.408845   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:26:39.442920   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:26:39.481085   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:26:39.516586   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem --> /usr/share/ca-certificates/9909.pem (1338 bytes)
	I1018 12:26:39.554538   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem --> /usr/share/ca-certificates/99092.pem (1708 bytes)
	I1018 12:26:39.594522   52813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:26:39.619184   52813 ssh_runner.go:195] Run: openssl version
	I1018 12:26:39.626356   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:26:39.640801   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.646535   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.646588   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.654893   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:26:39.669539   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9909.pem && ln -fs /usr/share/ca-certificates/9909.pem /etc/ssl/certs/9909.pem"
	I1018 12:26:39.684162   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.689731   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.689790   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.697600   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9909.pem /etc/ssl/certs/51391683.0"
	I1018 12:26:39.714166   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99092.pem && ln -fs /usr/share/ca-certificates/99092.pem /etc/ssl/certs/99092.pem"
	I1018 12:26:39.729837   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.735419   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.735488   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.743203   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99092.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:26:39.758932   52813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:26:39.765101   52813 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:26:39.765169   52813 kubeadm.go:400] StartCluster: {Name:auto-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:26:39.765332   52813 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:26:39.785247   52813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:26:39.798374   52813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:26:39.810946   52813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:26:39.825029   52813 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:26:39.825056   52813 kubeadm.go:157] found existing configuration files:
	
	I1018 12:26:39.825096   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:26:39.836919   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:26:39.836997   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:26:39.849872   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:26:39.861692   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:26:39.861767   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:26:39.877485   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:26:39.890697   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:26:39.890777   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:26:39.906568   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:26:39.920626   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:26:39.920740   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:26:39.936398   52813 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 12:26:39.998219   52813 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:26:39.998340   52813 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:26:40.111469   52813 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:26:40.111618   52813 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:26:40.111795   52813 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:26:40.128525   52813 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:26:40.130607   52813 out.go:252]   - Generating certificates and keys ...
	I1018 12:26:40.130710   52813 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:26:40.130803   52813 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:26:40.350726   52813 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:26:40.455768   52813 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:26:40.598243   52813 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:26:41.011504   52813 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:26:41.091757   52813 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:26:41.092141   52813 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I1018 12:26:41.376370   52813 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:26:41.376756   52813 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I1018 12:26:41.679155   52813 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:26:41.832796   52813 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:26:42.091476   52813 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:26:42.091617   52813 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:26:42.555206   52813 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:26:42.822944   52813 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:26:43.272107   52813 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:26:43.527688   52813 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:26:43.769537   52813 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:26:43.770332   52813 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:26:43.773363   52813 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:26:39.521607   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": read tcp 192.168.39.1:35984->192.168.39.140:8443: read: connection reset by peer
	I1018 12:26:39.521660   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:39.522161   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:39.940469   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:39.941178   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:40.440329   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:40.441012   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:40.940495   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:40.941051   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:41.440547   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:41.441243   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:41.939828   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:41.940532   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:42.440175   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:42.440815   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:42.940483   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:42.941097   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:43.439852   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:43.440639   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:43.940431   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:43.941130   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:39.730484   54024 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 12:26:39.730631   54024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:26:39.730675   54024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:26:39.746220   54024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I1018 12:26:39.746691   54024 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:26:39.747252   54024 main.go:141] libmachine: Using API Version  1
	I1018 12:26:39.747278   54024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:26:39.747712   54024 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:26:39.747910   54024 main.go:141] libmachine: (kindnet-720125) Calling .GetMachineName
	I1018 12:26:39.748157   54024 main.go:141] libmachine: (kindnet-720125) Calling .DriverName
	I1018 12:26:39.748327   54024 start.go:159] libmachine.API.Create for "kindnet-720125" (driver="kvm2")
	I1018 12:26:39.748358   54024 client.go:168] LocalClient.Create starting
	I1018 12:26:39.748391   54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem
	I1018 12:26:39.748425   54024 main.go:141] libmachine: Decoding PEM data...
	I1018 12:26:39.748441   54024 main.go:141] libmachine: Parsing certificate...
	I1018 12:26:39.748493   54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem
	I1018 12:26:39.748514   54024 main.go:141] libmachine: Decoding PEM data...
	I1018 12:26:39.748527   54024 main.go:141] libmachine: Parsing certificate...
	I1018 12:26:39.748542   54024 main.go:141] libmachine: Running pre-create checks...
	I1018 12:26:39.748555   54024 main.go:141] libmachine: (kindnet-720125) Calling .PreCreateCheck
	I1018 12:26:39.748883   54024 main.go:141] libmachine: (kindnet-720125) Calling .GetConfigRaw
	I1018 12:26:39.749274   54024 main.go:141] libmachine: Creating machine...
	I1018 12:26:39.749304   54024 main.go:141] libmachine: (kindnet-720125) Calling .Create
	I1018 12:26:39.749445   54024 main.go:141] libmachine: (kindnet-720125) creating domain...
	I1018 12:26:39.749466   54024 main.go:141] libmachine: (kindnet-720125) creating network...
	I1018 12:26:39.750975   54024 main.go:141] libmachine: (kindnet-720125) DBG | found existing default network
	I1018 12:26:39.751279   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network connections='3'>
	I1018 12:26:39.751320   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>default</name>
	I1018 12:26:39.751345   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 12:26:39.751362   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <forward mode='nat'>
	I1018 12:26:39.751384   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <nat>
	I1018 12:26:39.751398   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <port start='1024' end='65535'/>
	I1018 12:26:39.751406   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </nat>
	I1018 12:26:39.751412   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </forward>
	I1018 12:26:39.751448   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 12:26:39.751488   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 12:26:39.751506   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 12:26:39.751517   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.751527   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 12:26:39.751535   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.751543   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.751557   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.751576   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.752366   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.752168   54053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:24:f4} reservation:<nil>}
	I1018 12:26:39.753108   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.753033   54053 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260370}
	I1018 12:26:39.753127   54024 main.go:141] libmachine: (kindnet-720125) DBG | defining private network:
	I1018 12:26:39.753137   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.753143   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
	I1018 12:26:39.753152   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>mk-kindnet-720125</name>
	I1018 12:26:39.753159   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <dns enable='no'/>
	I1018 12:26:39.753168   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1018 12:26:39.753175   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.753184   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1018 12:26:39.753190   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.753213   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.753246   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.753262   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.759190   54024 main.go:141] libmachine: (kindnet-720125) DBG | creating private network mk-kindnet-720125 192.168.50.0/24...
	I1018 12:26:39.842530   54024 main.go:141] libmachine: (kindnet-720125) DBG | private network mk-kindnet-720125 192.168.50.0/24 created
	I1018 12:26:39.842829   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
	I1018 12:26:39.842844   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>mk-kindnet-720125</name>
	I1018 12:26:39.842855   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>57af09bd-510d-4d07-b5da-0d64b9c8c775</uuid>
	I1018 12:26:39.842865   54024 main.go:141] libmachine: (kindnet-720125) setting up store path in /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
	I1018 12:26:39.842873   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 12:26:39.842883   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <mac address='52:54:00:4a:b8:f3'/>
	I1018 12:26:39.842890   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <dns enable='no'/>
	I1018 12:26:39.842900   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1018 12:26:39.842912   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.842920   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1018 12:26:39.842926   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.842937   54024 main.go:141] libmachine: (kindnet-720125) building disk image from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 12:26:39.842947   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.842958   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.842975   54024 main.go:141] libmachine: (kindnet-720125) Downloading /home/jenkins/minikube-integration/21647-6010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 12:26:39.842995   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.843018   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.842834   54053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:40.099390   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.099247   54053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/id_rsa...
	I1018 12:26:40.381985   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381830   54053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk...
	I1018 12:26:40.382025   54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing magic tar header
	I1018 12:26:40.382039   54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing SSH key tar header
	I1018 12:26:40.382049   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381994   54053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
	I1018 12:26:40.382145   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125
	I1018 12:26:40.382185   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 (perms=drwx------)
	I1018 12:26:40.382204   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines (perms=drwxr-xr-x)
	I1018 12:26:40.382225   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines
	I1018 12:26:40.382245   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:40.382257   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010
	I1018 12:26:40.382268   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 12:26:40.382278   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins
	I1018 12:26:40.382302   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube (perms=drwxr-xr-x)
	I1018 12:26:40.382314   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010 (perms=drwxrwxr-x)
	I1018 12:26:40.382334   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home
	I1018 12:26:40.382345   54024 main.go:141] libmachine: (kindnet-720125) DBG | skipping /home - not owner
	I1018 12:26:40.382356   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 12:26:40.382367   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 12:26:40.382376   54024 main.go:141] libmachine: (kindnet-720125) defining domain...
	I1018 12:26:40.383798   54024 main.go:141] libmachine: (kindnet-720125) defining domain using XML: 
	I1018 12:26:40.383831   54024 main.go:141] libmachine: (kindnet-720125) <domain type='kvm'>
	I1018 12:26:40.383842   54024 main.go:141] libmachine: (kindnet-720125)   <name>kindnet-720125</name>
	I1018 12:26:40.383853   54024 main.go:141] libmachine: (kindnet-720125)   <memory unit='MiB'>3072</memory>
	I1018 12:26:40.383858   54024 main.go:141] libmachine: (kindnet-720125)   <vcpu>2</vcpu>
	I1018 12:26:40.383862   54024 main.go:141] libmachine: (kindnet-720125)   <features>
	I1018 12:26:40.383867   54024 main.go:141] libmachine: (kindnet-720125)     <acpi/>
	I1018 12:26:40.383875   54024 main.go:141] libmachine: (kindnet-720125)     <apic/>
	I1018 12:26:40.383882   54024 main.go:141] libmachine: (kindnet-720125)     <pae/>
	I1018 12:26:40.383886   54024 main.go:141] libmachine: (kindnet-720125)   </features>
	I1018 12:26:40.383891   54024 main.go:141] libmachine: (kindnet-720125)   <cpu mode='host-passthrough'>
	I1018 12:26:40.383898   54024 main.go:141] libmachine: (kindnet-720125)   </cpu>
	I1018 12:26:40.383905   54024 main.go:141] libmachine: (kindnet-720125)   <os>
	I1018 12:26:40.383916   54024 main.go:141] libmachine: (kindnet-720125)     <type>hvm</type>
	I1018 12:26:40.383924   54024 main.go:141] libmachine: (kindnet-720125)     <boot dev='cdrom'/>
	I1018 12:26:40.383934   54024 main.go:141] libmachine: (kindnet-720125)     <boot dev='hd'/>
	I1018 12:26:40.383944   54024 main.go:141] libmachine: (kindnet-720125)     <bootmenu enable='no'/>
	I1018 12:26:40.383948   54024 main.go:141] libmachine: (kindnet-720125)   </os>
	I1018 12:26:40.383953   54024 main.go:141] libmachine: (kindnet-720125)   <devices>
	I1018 12:26:40.383957   54024 main.go:141] libmachine: (kindnet-720125)     <disk type='file' device='cdrom'>
	I1018 12:26:40.383997   54024 main.go:141] libmachine: (kindnet-720125)       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
	I1018 12:26:40.384023   54024 main.go:141] libmachine: (kindnet-720125)       <target dev='hdc' bus='scsi'/>
	I1018 12:26:40.384037   54024 main.go:141] libmachine: (kindnet-720125)       <readonly/>
	I1018 12:26:40.384051   54024 main.go:141] libmachine: (kindnet-720125)     </disk>
	I1018 12:26:40.384065   54024 main.go:141] libmachine: (kindnet-720125)     <disk type='file' device='disk'>
	I1018 12:26:40.384079   54024 main.go:141] libmachine: (kindnet-720125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 12:26:40.384096   54024 main.go:141] libmachine: (kindnet-720125)       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
	I1018 12:26:40.384108   54024 main.go:141] libmachine: (kindnet-720125)       <target dev='hda' bus='virtio'/>
	I1018 12:26:40.384119   54024 main.go:141] libmachine: (kindnet-720125)     </disk>
	I1018 12:26:40.384133   54024 main.go:141] libmachine: (kindnet-720125)     <interface type='network'>
	I1018 12:26:40.384146   54024 main.go:141] libmachine: (kindnet-720125)       <source network='mk-kindnet-720125'/>
	I1018 12:26:40.384157   54024 main.go:141] libmachine: (kindnet-720125)       <model type='virtio'/>
	I1018 12:26:40.384168   54024 main.go:141] libmachine: (kindnet-720125)     </interface>
	I1018 12:26:40.384179   54024 main.go:141] libmachine: (kindnet-720125)     <interface type='network'>
	I1018 12:26:40.384192   54024 main.go:141] libmachine: (kindnet-720125)       <source network='default'/>
	I1018 12:26:40.384202   54024 main.go:141] libmachine: (kindnet-720125)       <model type='virtio'/>
	I1018 12:26:40.384216   54024 main.go:141] libmachine: (kindnet-720125)     </interface>
	I1018 12:26:40.384230   54024 main.go:141] libmachine: (kindnet-720125)     <serial type='pty'>
	I1018 12:26:40.384236   54024 main.go:141] libmachine: (kindnet-720125)       <target port='0'/>
	I1018 12:26:40.384245   54024 main.go:141] libmachine: (kindnet-720125)     </serial>
	I1018 12:26:40.384254   54024 main.go:141] libmachine: (kindnet-720125)     <console type='pty'>
	I1018 12:26:40.384266   54024 main.go:141] libmachine: (kindnet-720125)       <target type='serial' port='0'/>
	I1018 12:26:40.384277   54024 main.go:141] libmachine: (kindnet-720125)     </console>
	I1018 12:26:40.384304   54024 main.go:141] libmachine: (kindnet-720125)     <rng model='virtio'>
	I1018 12:26:40.384323   54024 main.go:141] libmachine: (kindnet-720125)       <backend model='random'>/dev/random</backend>
	I1018 12:26:40.384332   54024 main.go:141] libmachine: (kindnet-720125)     </rng>
	I1018 12:26:40.384340   54024 main.go:141] libmachine: (kindnet-720125)   </devices>
	I1018 12:26:40.384354   54024 main.go:141] libmachine: (kindnet-720125) </domain>
	I1018 12:26:40.384364   54024 main.go:141] libmachine: (kindnet-720125) 
	I1018 12:26:40.388970   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:3f:a0:78 in network default
	I1018 12:26:40.389652   54024 main.go:141] libmachine: (kindnet-720125) starting domain...
	I1018 12:26:40.389680   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:40.389688   54024 main.go:141] libmachine: (kindnet-720125) ensuring networks are active...
	I1018 12:26:40.390420   54024 main.go:141] libmachine: (kindnet-720125) Ensuring network default is active
	I1018 12:26:40.390825   54024 main.go:141] libmachine: (kindnet-720125) Ensuring network mk-kindnet-720125 is active
	I1018 12:26:40.391737   54024 main.go:141] libmachine: (kindnet-720125) getting domain XML...
	I1018 12:26:40.393514   54024 main.go:141] libmachine: (kindnet-720125) DBG | starting domain XML:
	I1018 12:26:40.393530   54024 main.go:141] libmachine: (kindnet-720125) DBG | <domain type='kvm'>
	I1018 12:26:40.393539   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>kindnet-720125</name>
	I1018 12:26:40.393548   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>d3c666c7-5967-40a8-9b36-6cfb4dcc1fb1</uuid>
	I1018 12:26:40.393556   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 12:26:40.393564   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 12:26:40.393573   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 12:26:40.393580   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <os>
	I1018 12:26:40.393593   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 12:26:40.393629   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <boot dev='cdrom'/>
	I1018 12:26:40.393654   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <boot dev='hd'/>
	I1018 12:26:40.393666   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <bootmenu enable='no'/>
	I1018 12:26:40.393675   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </os>
	I1018 12:26:40.393682   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <features>
	I1018 12:26:40.393690   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <acpi/>
	I1018 12:26:40.393698   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <apic/>
	I1018 12:26:40.393707   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <pae/>
	I1018 12:26:40.393717   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </features>
	I1018 12:26:40.393726   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 12:26:40.393736   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <clock offset='utc'/>
	I1018 12:26:40.393745   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 12:26:40.393755   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_reboot>restart</on_reboot>
	I1018 12:26:40.393764   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_crash>destroy</on_crash>
	I1018 12:26:40.393774   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <devices>
	I1018 12:26:40.393805   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 12:26:40.393828   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <disk type='file' device='cdrom'>
	I1018 12:26:40.393841   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <driver name='qemu' type='raw'/>
	I1018 12:26:40.393857   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
	I1018 12:26:40.393871   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 12:26:40.393896   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <readonly/>
	I1018 12:26:40.393912   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 12:26:40.393927   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </disk>
	I1018 12:26:40.393940   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <disk type='file' device='disk'>
	I1018 12:26:40.393952   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 12:26:40.393965   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
	I1018 12:26:40.393971   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target dev='hda' bus='virtio'/>
	I1018 12:26:40.393982   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 12:26:40.393987   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </disk>
	I1018 12:26:40.393996   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 12:26:40.394012   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 12:26:40.394022   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </controller>
	I1018 12:26:40.394034   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 12:26:40.394049   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 12:26:40.394062   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 12:26:40.394074   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </controller>
	I1018 12:26:40.394090   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <interface type='network'>
	I1018 12:26:40.394101   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <mac address='52:54:00:0e:b7:f4'/>
	I1018 12:26:40.394112   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source network='mk-kindnet-720125'/>
	I1018 12:26:40.394129   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <model type='virtio'/>
	I1018 12:26:40.394144   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 12:26:40.394159   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </interface>
	I1018 12:26:40.394175   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <interface type='network'>
	I1018 12:26:40.394193   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <mac address='52:54:00:3f:a0:78'/>
	I1018 12:26:40.394204   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source network='default'/>
	I1018 12:26:40.394215   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <model type='virtio'/>
	I1018 12:26:40.394226   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 12:26:40.394235   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </interface>
	I1018 12:26:40.394244   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <serial type='pty'>
	I1018 12:26:40.394254   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target type='isa-serial' port='0'>
	I1018 12:26:40.394281   54024 main.go:141] libmachine: (kindnet-720125) DBG |         <model name='isa-serial'/>
	I1018 12:26:40.394319   54024 main.go:141] libmachine: (kindnet-720125) DBG |       </target>
	I1018 12:26:40.394338   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </serial>
	I1018 12:26:40.394356   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <console type='pty'>
	I1018 12:26:40.394370   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target type='serial' port='0'/>
	I1018 12:26:40.394380   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </console>
	I1018 12:26:40.394393   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <input type='mouse' bus='ps2'/>
	I1018 12:26:40.394402   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 12:26:40.394415   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <audio id='1' type='none'/>
	I1018 12:26:40.394423   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <memballoon model='virtio'>
	I1018 12:26:40.394443   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 12:26:40.394459   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </memballoon>
	I1018 12:26:40.394470   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <rng model='virtio'>
	I1018 12:26:40.394482   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <backend model='random'>/dev/random</backend>
	I1018 12:26:40.394496   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 12:26:40.394505   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </rng>
	I1018 12:26:40.394513   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </devices>
	I1018 12:26:40.394522   54024 main.go:141] libmachine: (kindnet-720125) DBG | </domain>
	I1018 12:26:40.394542   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:41.782659   54024 main.go:141] libmachine: (kindnet-720125) waiting for domain to start...
	I1018 12:26:41.784057   54024 main.go:141] libmachine: (kindnet-720125) domain is now running
	I1018 12:26:41.784080   54024 main.go:141] libmachine: (kindnet-720125) waiting for IP...
	I1018 12:26:41.784831   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:41.785431   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:41.785459   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:41.785812   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:41.785887   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.785810   54053 retry.go:31] will retry after 204.388807ms: waiting for domain to come up
	I1018 12:26:41.992592   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:41.993377   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:41.993404   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:41.993817   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:41.993887   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.993817   54053 retry.go:31] will retry after 374.842513ms: waiting for domain to come up
	I1018 12:26:42.370189   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:42.370750   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:42.370778   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:42.371199   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:42.371231   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.371171   54053 retry.go:31] will retry after 382.206082ms: waiting for domain to come up
	I1018 12:26:42.755732   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:42.756456   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:42.756481   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:42.756848   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:42.756877   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.756832   54053 retry.go:31] will retry after 434.513358ms: waiting for domain to come up
	I1018 12:26:43.192495   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:43.193112   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:43.193137   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:43.193557   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:43.193584   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.193492   54053 retry.go:31] will retry after 622.396959ms: waiting for domain to come up
	I1018 12:26:43.818233   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:43.819067   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:43.819104   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:43.819584   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:43.819616   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.819536   54053 retry.go:31] will retry after 815.894877ms: waiting for domain to come up
	I1018 12:26:44.636575   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:44.637323   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:44.637353   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:44.637721   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:44.637759   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:44.637705   54053 retry.go:31] will retry after 1.067259778s: waiting for domain to come up
	I1018 12:26:43.775588   52813 out.go:252]   - Booting up control plane ...
	I1018 12:26:43.775698   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:26:43.775800   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:26:43.777341   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:26:43.800502   52813 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:26:43.800688   52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:26:43.808677   52813 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:26:43.808867   52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:26:43.809016   52813 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:26:43.996155   52813 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:26:43.996352   52813 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:26:44.997230   52813 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001669295s
	I1018 12:26:45.000531   52813 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:26:45.000667   52813 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.13:8443/livez
	I1018 12:26:45.000814   52813 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:26:45.000947   52813 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:26:44.439803   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:44.440530   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:44.940153   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:44.940832   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:45.439761   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:45.440519   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:45.940122   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:45.940844   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:46.439543   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:46.440225   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:46.939926   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:46.940690   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:47.440072   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:47.440765   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:47.940122   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:47.940902   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:48.440476   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:48.441175   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:48.940453   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:48.941104   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	
	
	==> Docker <==
	Oct 18 12:25:51 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2bf7782642e4711b15be6d3ec08d29a271276dc02c8b8205befe59a7505897ae/resolv.conf as [nameserver 192.168.122.1]"
	Oct 18 12:25:53 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/307c8c80145ed27dca61950ef5cf63b804994215fc5f4759617dd3e150ef2cfa/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 18 12:25:53 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/22320121e1a756b48dc7f5c15a1a3cb7252ccd513e0ab07d47c606f58c53f0f0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.120729117Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212112555Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212342190Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 18 12:25:54 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.421865126Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:26:02 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:02Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.830994794Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.904996286Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.905088942Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 18 12:26:06 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919653355Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919692389Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.923070969Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.924597650Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:14 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:14.766195371Z" level=info msg="ignoring event" container=28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-jc7tz_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"50ccc6bf5c1dc8dbc44839aac4aaf80b91e88cfa36a35e71c99ecbc99a5d2efb\""
	Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579823134Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579851904Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584080633Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584132115Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.670933568Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3a2c1a468e77b       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        48 seconds ago       Running             kubernetes-dashboard      0                   22320121e1a75       kubernetes-dashboard-855c9754f9-8frzf
	14a606bd02ea2       52546a367cc9e                                                                                         58 seconds ago       Running             coredns                   1                   2bf7782642e47       coredns-66bc5c9577-s7znr
	3181063a95749       56cc512116c8f                                                                                         58 seconds ago       Running             busybox                   1                   f01a1904eab6f       busybox
	28ffefdfcaefa       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   002d263a57e06       storage-provisioner
	e74b601e6b20b       fc25172553d79                                                                                         About a minute ago   Running             kube-proxy                1                   5916362f7151c       kube-proxy-hmf6q
	aa45133c5292e       7dd6aaa1717ab                                                                                         About a minute ago   Running             kube-scheduler            1                   c386eff006256       kube-scheduler-default-k8s-diff-port-948988
	0d33563cfd415       5f1f5298c888d                                                                                         About a minute ago   Running             etcd                      1                   aa5a738a016e1       etcd-default-k8s-diff-port-948988
	482f645840fbd       c3994bc696102                                                                                         About a minute ago   Running             kube-apiserver            1                   6d80f3bf62181       kube-apiserver-default-k8s-diff-port-948988
	cbcb65b91df5f       c80c8dbafe7dd                                                                                         About a minute ago   Running             kube-controller-manager   1                   9b74e777c1d81       kube-controller-manager-default-k8s-diff-port-948988
	06b0d6a0fe73a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   About a minute ago   Exited              busybox                   0                   02768f34f11ea       busybox
	bf61d222c7e61       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   4a9e23fe5352b       coredns-66bc5c9577-s7znr
	72d0dd1b3e6d1       fc25172553d79                                                                                         2 minutes ago        Exited              kube-proxy                0                   3b1b31ff39772       kube-proxy-hmf6q
	ac171ed99aa7b       7dd6aaa1717ab                                                                                         2 minutes ago        Exited              kube-scheduler            0                   27f94a06346ec       kube-scheduler-default-k8s-diff-port-948988
	07dc691cd2b41       c80c8dbafe7dd                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   7c2c9ab301ac9       kube-controller-manager-default-k8s-diff-port-948988
	5a3d271b1a7a4       5f1f5298c888d                                                                                         2 minutes ago        Exited              etcd                      0                   7776a7d62b3b1       etcd-default-k8s-diff-port-948988
	5dfc625534d2e       c3994bc696102                                                                                         2 minutes ago        Exited              kube-apiserver            0                   20ac876b72a06       kube-apiserver-default-k8s-diff-port-948988
	
	
	==> coredns [14a606bd02ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47328 - 15007 "HINFO IN 5766678739025722613.5866360335637854453. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103273346s
	
	
	==> coredns [bf61d222c7e6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48576 - 64076 "HINFO IN 6932009071857870960.7176900972779109838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.13763s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-948988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-948988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-948988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_24_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:24:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-948988
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:26:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:25:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.154
	  Hostname:    default-k8s-diff-port-948988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7b095482f0f4bd294376564492aae84
	  System UUID:                d7b09548-2f0f-4bd2-9437-6564492aae84
	  Boot ID:                    5dbb338e-d666-4176-8009-ddf389982046
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 coredns-66bc5c9577-s7znr                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m11s
	  kube-system                 etcd-default-k8s-diff-port-948988                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-948988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-948988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-hmf6q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-948988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 metrics-server-746fcd58dc-7788d                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gxs6s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8frzf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   Starting                 64s                    kube-proxy       
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m26s (x7 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m15s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeReady
	  Normal   RegisteredNode           2m14s                  node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 69s                    kubelet          Node default-k8s-diff-port-948988 has been rebooted, boot id: 5dbb338e-d666-4176-8009-ddf389982046
	  Normal   RegisteredNode           65s                    node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
	  Normal   Starting                 3s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Oct18 12:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001590] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004075] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.931702] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.130272] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102368] kauditd_printk_skb: 449 callbacks suppressed
	[  +5.669077] kauditd_printk_skb: 165 callbacks suppressed
	[  +5.952206] kauditd_printk_skb: 134 callbacks suppressed
	[  +2.969146] kauditd_printk_skb: 264 callbacks suppressed
	[Oct18 12:26] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.224441] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [0d33563cfd41] <==
	{"level":"info","ts":"2025-10-18T12:26:50.186827Z","caller":"traceutil/trace.go:172","msg":"trace[1372174769] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"399.841982ms","start":"2025-10-18T12:26:49.786974Z","end":"2025-10-18T12:26:50.186816Z","steps":["trace[1372174769] 'range keys from in-memory index tree'  (duration: 399.699339ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.186874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.786955Z","time spent":"399.895498ms","remote":"127.0.0.1:58530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-18T12:26:50.333810Z","caller":"traceutil/trace.go:172","msg":"trace[111824645] linearizableReadLoop","detail":"{readStateIndex:805; appliedIndex:805; }","duration":"469.70081ms","start":"2025-10-18T12:26:49.864083Z","end":"2025-10-18T12:26:50.333784Z","steps":["trace[111824645] 'read index received'  (duration: 469.662848ms)","trace[111824645] 'applied index is now lower than readState.Index'  (duration: 36.562µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:26:50.333966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"469.888536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.334000Z","caller":"traceutil/trace.go:172","msg":"trace[512175939] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:752; }","duration":"469.93891ms","start":"2025-10-18T12:26:49.864053Z","end":"2025-10-18T12:26:50.333992Z","steps":["trace[512175939] 'agreement among raft nodes before linearized reading'  (duration: 469.85272ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.334133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.864029Z","time spent":"469.995ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/flowschemas\" limit:1 "}
	{"level":"info","ts":"2025-10-18T12:26:50.334869Z","caller":"traceutil/trace.go:172","msg":"trace[1055338688] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"495.901712ms","start":"2025-10-18T12:26:49.838955Z","end":"2025-10-18T12:26:50.334857Z","steps":["trace[1055338688] 'process raft request'  (duration: 495.716875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.335648Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.838929Z","time spent":"495.989792ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" value_size:3336 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.443549Z","caller":"traceutil/trace.go:172","msg":"trace[381001447] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:806; }","duration":"109.522762ms","start":"2025-10-18T12:26:50.333879Z","end":"2025-10-18T12:26:50.443401Z","steps":["trace[381001447] 'read index received'  (duration: 109.304835ms)","trace[381001447] 'applied index is now lower than readState.Index'  (duration: 216.349µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:26:50.443898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.661283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.444087Z","caller":"traceutil/trace.go:172","msg":"trace[269629089] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"254.861648ms","start":"2025-10-18T12:26:50.189213Z","end":"2025-10-18T12:26:50.444075Z","steps":["trace[269629089] 'agreement among raft nodes before linearized reading'  (duration: 254.569015ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:26:50.444986Z","caller":"traceutil/trace.go:172","msg":"trace[1424081342] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"604.238859ms","start":"2025-10-18T12:26:49.840736Z","end":"2025-10-18T12:26:50.444975Z","steps":["trace[1424081342] 'process raft request'  (duration: 603.242308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.445058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.542092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-18T12:26:50.445075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840723Z","time spent":"604.304586ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" value_size:5080 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.445122Z","caller":"traceutil/trace.go:172","msg":"trace[399968637] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:752; }","duration":"481.574042ms","start":"2025-10-18T12:26:49.963502Z","end":"2025-10-18T12:26:50.445076Z","steps":["trace[399968637] 'agreement among raft nodes before linearized reading'  (duration: 481.324719ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.445200Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.963483Z","time spent":"481.704642ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":27,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
	{"level":"info","ts":"2025-10-18T12:26:50.446712Z","caller":"traceutil/trace.go:172","msg":"trace[824860143] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.054697ms","start":"2025-10-18T12:26:49.840601Z","end":"2025-10-18T12:26:50.446656Z","steps":["trace[824860143] 'process raft request'  (duration: 603.007187ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.446779Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840584Z","time spent":"606.160126ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" value_size:5531 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.446897Z","caller":"traceutil/trace.go:172","msg":"trace[1942397087] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.190325ms","start":"2025-10-18T12:26:49.840699Z","end":"2025-10-18T12:26:50.446890Z","steps":["trace[1942397087] 'process raft request'  (duration: 603.239357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.446935Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840694Z","time spent":"606.222506ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" value_size:4413 >> failure:<>"}
	{"level":"warn","ts":"2025-10-18T12:26:50.446998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.548699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" limit:1 ","response":"range_response_count:1 size:4976"}
	{"level":"info","ts":"2025-10-18T12:26:50.447420Z","caller":"traceutil/trace.go:172","msg":"trace[673088281] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988; range_end:; response_count:1; response_revision:753; }","duration":"106.587183ms","start":"2025-10-18T12:26:50.340430Z","end":"2025-10-18T12:26:50.447017Z","steps":["trace[673088281] 'agreement among raft nodes before linearized reading'  (duration: 106.46749ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:26:50.448436Z","caller":"traceutil/trace.go:172","msg":"trace[1596410668] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"250.464751ms","start":"2025-10-18T12:26:50.197959Z","end":"2025-10-18T12:26:50.448424Z","steps":["trace[1596410668] 'process raft request'  (duration: 246.217803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.448558Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.631999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.448589Z","caller":"traceutil/trace.go:172","msg":"trace[1722869229] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:753; }","duration":"100.661173ms","start":"2025-10-18T12:26:50.347914Z","end":"2025-10-18T12:26:50.448575Z","steps":["trace[1722869229] 'agreement among raft nodes before linearized reading'  (duration: 100.605021ms)"],"step_count":1}
	
	
	==> etcd [5a3d271b1a7a] <==
	{"level":"info","ts":"2025-10-18T12:24:40.137898Z","caller":"traceutil/trace.go:172","msg":"trace[1031995627] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"153.504515ms","start":"2025-10-18T12:24:39.984387Z","end":"2025-10-18T12:24:40.137891Z","steps":["trace[1031995627] 'process raft request'  (duration: 106.790781ms)","trace[1031995627] 'compare'  (duration: 46.286033ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:24:40.138807Z","caller":"traceutil/trace.go:172","msg":"trace[2073145057] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"154.722362ms","start":"2025-10-18T12:24:39.984073Z","end":"2025-10-18T12:24:40.138795Z","steps":["trace[2073145057] 'process raft request'  (duration: 153.550593ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.138990Z","caller":"traceutil/trace.go:172","msg":"trace[460852249] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"147.204006ms","start":"2025-10-18T12:24:39.991724Z","end":"2025-10-18T12:24:40.138928Z","steps":["trace[460852249] 'process raft request'  (duration: 145.946011ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.139208Z","caller":"traceutil/trace.go:172","msg":"trace[1691503075] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"130.816492ms","start":"2025-10-18T12:24:40.008382Z","end":"2025-10-18T12:24:40.139199Z","steps":["trace[1691503075] 'process raft request'  (duration: 129.325269ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.144497Z","caller":"traceutil/trace.go:172","msg":"trace[842550493] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"135.72185ms","start":"2025-10-18T12:24:40.008758Z","end":"2025-10-18T12:24:40.144480Z","steps":["trace[842550493] 'process raft request'  (duration: 128.981035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:24:40.144822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.354219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-18T12:24:40.144866Z","caller":"traceutil/trace.go:172","msg":"trace[397740631] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:370; }","duration":"122.41407ms","start":"2025-10-18T12:24:40.022443Z","end":"2025-10-18T12:24:40.144857Z","steps":["trace[397740631] 'agreement among raft nodes before linearized reading'  (duration: 122.2939ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:25:00.231361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:25:00.231451Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
	{"level":"error","ts":"2025-10-18T12:25:00.231556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:07.245321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:07.249128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.249192Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cb84593c3b1392d","current-leader-member-id":"3cb84593c3b1392d"}
	{"level":"info","ts":"2025-10-18T12:25:07.249489Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:25:07.249534Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:25:07.252745Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:07.252848Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:25:07.252863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:25:07.253498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:07.253553Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:25:07.253569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.256384Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"error","ts":"2025-10-18T12:25:07.256475Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.256703Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2025-10-18T12:25:07.256718Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
	
	
	==> kernel <==
	 12:26:51 up 1 min,  0 users,  load average: 2.58, 0.76, 0.27
	Linux default-k8s-diff-port-948988 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [482f645840fb] <==
	E1018 12:25:43.880029       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 12:25:43.880149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1018 12:25:43.881283       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1018 12:25:44.600365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:25:44.665650       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:25:44.707914       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:25:44.717555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:25:46.458993       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:25:46.554520       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:25:46.699128       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:25:47.509491       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:25:47.794476       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.100.186"}
	I1018 12:25:47.820795       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.78.66"}
	W1018 12:26:47.665841       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:26:47.666026       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 12:26:47.666042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 12:26:47.681677       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:26:47.681971       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 12:26:47.682341       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [5dfc625534d2] <==
	W1018 12:25:09.464721       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.517443       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.620363       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.693884       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.721047       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.726611       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.759371       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.795061       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.819207       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.841071       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.864445       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.896679       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.930411       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.971423       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.017882       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.045148       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.067233       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.127112       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.133877       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.157359       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.165740       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.173381       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.191257       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.254823       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.300085       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [07dc691cd2b4] <==
	I1018 12:24:37.212816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:24:37.213552       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 12:24:37.214863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:24:37.215195       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 12:24:37.215506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:24:37.215712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:24:37.215992       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:24:37.216210       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:24:37.216297       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:24:37.220772       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:24:37.221277       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:24:37.229865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-948988" podCIDRs=["10.244.0.0/24"]
	I1018 12:24:37.230483       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:24:37.235336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:24:37.236208       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:24:37.243773       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:24:37.261496       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:24:37.262756       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:24:37.263515       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:24:37.263680       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:24:37.332884       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 12:24:37.408817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:24:37.409172       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:24:37.409412       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:24:37.433850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [cbcb65b91df5] <==
	I1018 12:25:46.326514       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:25:46.330568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:25:46.338200       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:25:46.354827       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:25:46.354933       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:25:46.358135       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:25:46.358166       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:25:46.358174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:25:46.361699       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:25:46.362331       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:25:46.362518       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-948988"
	I1018 12:25:46.362582       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:25:46.362715       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:25:46.364998       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:25:46.397419       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 12:25:47.622164       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.637442       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.640602       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.654283       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.654837       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.670862       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.673502       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1018 12:25:56.364778       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 12:26:47.748771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:26:47.764048       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [72d0dd1b3e6d] <==
	I1018 12:24:41.564008       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:24:41.664708       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:24:41.664884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
	E1018 12:24:41.665067       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:24:41.766806       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:24:41.766902       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:24:41.767037       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:24:41.808707       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:24:41.810126       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:24:41.810170       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:24:41.819567       1 config.go:200] "Starting service config controller"
	I1018 12:24:41.819614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:24:41.819656       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:24:41.819662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:24:41.819679       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:24:41.819685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:24:41.834904       1 config.go:309] "Starting node config controller"
	I1018 12:24:41.835028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:24:41.835056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:24:41.927064       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:24:41.927258       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:24:41.927530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e74b601e6b20] <==
	I1018 12:25:45.811654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:25:45.913019       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:25:45.913130       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
	E1018 12:25:45.913538       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:25:46.627631       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:25:46.627729       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:25:46.627769       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:25:46.729383       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:25:46.742257       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:25:46.742299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:46.769189       1 config.go:309] "Starting node config controller"
	I1018 12:25:46.769207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:25:46.769215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:25:46.772876       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:25:46.772985       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:25:46.773282       1 config.go:200] "Starting service config controller"
	I1018 12:25:46.773361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:25:46.773393       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:25:46.773398       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:25:46.874997       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:25:46.875472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:25:46.875491       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [aa45133c5292] <==
	I1018 12:25:40.892121       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:25:42.779818       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:25:42.779913       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:25:42.779937       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:25:42.779952       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:25:42.837530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:25:42.837672       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:42.850332       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:42.850953       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:25:42.851127       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:42.851921       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:25:42.953076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ac171ed99aa7] <==
	E1018 12:24:29.521551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:24:29.521602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:24:29.521714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:24:29.521771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:24:29.521820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:24:30.388364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:24:30.423548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:24:30.458398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:24:30.471430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:24:30.482651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:24:30.502659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:24:30.602254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:24:30.613712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:24:30.623631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:24:30.752533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:24:30.774425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:24:30.882034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:24:30.922203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1018 12:24:32.510730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:00.227081       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:25:00.227204       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:00.227889       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:25:00.228116       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:25:00.228207       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:25:00.228229       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:48.808146    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:48.818965    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.224325    4182 apiserver.go:52] "Watching apiserver"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.299725    4182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.334900    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a2da4bd7-fb36-44bc-9e08-4ccbe934a19a-tmp\") pod \"storage-provisioner\" (UID: \"a2da4bd7-fb36-44bc-9e08-4ccbe934a19a\") " pod="kube-system/storage-provisioner"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335035    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-xtables-lock\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335064    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-lib-modules\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.559117    4182 scope.go:117] "RemoveContainer" containerID="28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584832    4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584904    4182 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585150    4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-7788d_kube-system(482bf974-0dde-4e8e-abde-4c6a50f08c8d): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585190    4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7788d" podUID="482bf974-0dde-4e8e-abde-4c6a50f08c8d"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834067    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834883    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835048    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835180    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835659    4182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d8ce1671b6d868f5c427741052d8ba6bc2581e713fc06671798cbeaa0e2467"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.457040    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.473284    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.474210    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.475377    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587059    4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587186    4182 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587563    4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-gxs6s_kubernetes-dashboard(d9f0a621-1105-44d9-97ff-6ab18a09af31): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587744    4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gxs6s" podUID="d9f0a621-1105-44d9-97ff-6ab18a09af31"
	
	
	==> kubernetes-dashboard [3a2c1a468e77] <==
	2025/10/18 12:26:02 Starting overwatch
	2025/10/18 12:26:02 Using namespace: kubernetes-dashboard
	2025/10/18 12:26:02 Using in-cluster config to connect to apiserver
	2025/10/18 12:26:02 Using secret token for csrf signing
	2025/10/18 12:26:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:26:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:26:02 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:26:02 Generating JWE encryption key
	2025/10/18 12:26:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:26:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:26:02 Initializing JWE encryption key from synchronized object
	2025/10/18 12:26:02 Creating in-cluster Sidecar client
	2025/10/18 12:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:26:02 Serving insecurely on HTTP port: 9090
	2025/10/18 12:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [28ffefdfcaef] <==
	I1018 12:25:44.727571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:26:14.742942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1 (88.809101ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7788d" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gxs6s" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25: (1.453478465s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────
────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────
────────┤
	│ stop    │ -p default-k8s-diff-port-948988 --alsologtostderr -v=3                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:24 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ stop    │ -p embed-certs-270191 --alsologtostderr -v=3                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ stop    │ -p newest-cni-661287 --alsologtostderr -v=3                                                                                                                                                                                                    │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-948988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                        │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:26 UTC │
	│ addons  │ enable dashboard -p newest-cni-661287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-661287            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │                     │
	│ image   │ no-preload-839073 image list --format=json                                                                                                                                                                                                     │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ pause   │ -p no-preload-839073 --alsologtostderr -v=1                                                                                                                                                                                                    │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ unpause │ -p no-preload-839073 --alsologtostderr -v=1                                                                                                                                                                                                    │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ delete  │ -p no-preload-839073                                                                                                                                                                                                                           │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ delete  │ -p no-preload-839073                                                                                                                                                                                                                           │ no-preload-839073            │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
	│ start   │ -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false                                                                                                                       │ auto-720125                  │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │                     │
	│ image   │ default-k8s-diff-port-948988 image list --format=json                                                                                                                                                                                          │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ pause   │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ image   │ embed-certs-270191 image list --format=json                                                                                                                                                                                                    │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ pause   │ -p embed-certs-270191 --alsologtostderr -v=1                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ unpause │ -p embed-certs-270191 --alsologtostderr -v=1                                                                                                                                                                                                   │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ delete  │ -p embed-certs-270191                                                                                                                                                                                                                          │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ delete  │ -p embed-certs-270191                                                                                                                                                                                                                          │ embed-certs-270191           │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	│ start   │ -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false                                                                                                      │ kindnet-720125               │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1                                                                                                                                                                                         │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────
────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:26:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:26:39.638929   54024 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:26:39.639215   54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:39.639226   54024 out.go:374] Setting ErrFile to fd 2...
	I1018 12:26:39.639232   54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:26:39.639463   54024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 12:26:39.639986   54024 out.go:368] Setting JSON to false
	I1018 12:26:39.640948   54024 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4147,"bootTime":1760786253,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:26:39.641036   54024 start.go:141] virtualization: kvm guest
	I1018 12:26:39.642912   54024 out.go:179] * [kindnet-720125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:26:39.644319   54024 notify.go:220] Checking for updates...
	I1018 12:26:39.644359   54024 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:26:39.645575   54024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:26:39.646808   54024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 12:26:39.647991   54024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:39.649134   54024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:26:39.650480   54024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:26:39.652192   54024 config.go:182] Loaded profile config "auto-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652340   54024 config.go:182] Loaded profile config "default-k8s-diff-port-948988": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652479   54024 config.go:182] Loaded profile config "newest-cni-661287": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:26:39.652597   54024 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:26:39.691700   54024 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 12:26:39.692905   54024 start.go:305] selected driver: kvm2
	I1018 12:26:39.692920   54024 start.go:925] validating driver "kvm2" against <nil>
	I1018 12:26:39.692931   54024 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:26:39.693690   54024 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:26:39.693776   54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:26:39.709001   54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:26:39.709030   54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 12:26:39.724060   54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 12:26:39.724111   54024 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:26:39.724397   54024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:26:39.724424   54024 cni.go:84] Creating CNI manager for "kindnet"
	I1018 12:26:39.724429   54024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:26:39.724476   54024 start.go:349] cluster config:
	{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:26:39.724562   54024 iso.go:125] acquiring lock: {Name:mk7b9977f44c882a06d0a932f05bd4c8e4cea871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:26:39.726635   54024 out.go:179] * Starting "kindnet-720125" primary control-plane node in "kindnet-720125" cluster
	I1018 12:26:39.727995   54024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:26:39.728049   54024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1018 12:26:39.728060   54024 cache.go:58] Caching tarball of preloaded images
	I1018 12:26:39.728181   54024 preload.go:233] Found /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1018 12:26:39.728194   54024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 12:26:39.728350   54024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json ...
	I1018 12:26:39.728376   54024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json: {Name:mkf1b74ab9b12d679411e2c6e2e2149cae3e0078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.728580   54024 start.go:360] acquireMachinesLock for kindnet-720125: {Name:mk547bbf69b426adc37163c0f135f5803e3e7ae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 12:26:39.728617   54024 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "kindnet-720125"
	I1018 12:26:39.728642   54024 start.go:93] Provisioning new machine with config: &{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:26:39.728718   54024 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 12:26:35.461906   52813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.481663654s)
	I1018 12:26:35.461943   52813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 12:26:35.505542   52813 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1018 12:26:35.519942   52813 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1018 12:26:35.544751   52813 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:26:35.561575   52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:26:35.715918   52813 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:26:38.056356   52813 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.34040401s)
	I1018 12:26:38.056485   52813 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:26:38.085796   52813 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:26:38.085832   52813 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:26:38.085846   52813 kubeadm.go:934] updating node { 192.168.72.13 8443 v1.34.1 docker true true} ...
	I1018 12:26:38.085985   52813 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-720125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:26:38.086071   52813 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:26:38.149565   52813 cni.go:84] Creating CNI manager for ""
	I1018 12:26:38.149605   52813 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:26:38.149622   52813 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:26:38.149639   52813 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-720125 NodeName:auto-720125 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:26:38.149863   52813 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-720125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:26:38.149950   52813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:26:38.167666   52813 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:26:38.167750   52813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:26:38.182469   52813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 12:26:38.210498   52813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:26:38.235674   52813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 12:26:38.272656   52813 ssh_runner.go:195] Run: grep 192.168.72.13	control-plane.minikube.internal$ /etc/hosts
	I1018 12:26:38.278428   52813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:26:38.295186   52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:26:38.477493   52813 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:26:38.516693   52813 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125 for IP: 192.168.72.13
	I1018 12:26:38.516721   52813 certs.go:195] generating shared ca certs ...
	I1018 12:26:38.516742   52813 certs.go:227] acquiring lock for ca certs: {Name:mk4e9b668d7f4a08d373c26a5a5beadd4b363eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.516897   52813 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key
	I1018 12:26:38.516956   52813 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key
	I1018 12:26:38.516971   52813 certs.go:257] generating profile certs ...
	I1018 12:26:38.517059   52813 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key
	I1018 12:26:38.517080   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt with IP's: []
	I1018 12:26:38.795006   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt ...
	I1018 12:26:38.795041   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt: {Name:mke50b87cc8afab1bea24439b2b8f8b4fce785c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.795221   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key ...
	I1018 12:26:38.795236   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key: {Name:mk73a13799ed8cba8c6cf5586dd849d9aa3376fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:38.795369   52813 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319
	I1018 12:26:38.795387   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
	I1018 12:26:39.015985   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 ...
	I1018 12:26:39.016017   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319: {Name:mk48dc89d0bc936861c01af4faa11afa9b99fc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.016173   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 ...
	I1018 12:26:39.016187   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319: {Name:mk06903a8537a759ab5885d9e1ce94cdbffcbf0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.016265   52813 certs.go:382] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt
	I1018 12:26:39.016371   52813 certs.go:386] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key
	I1018 12:26:39.016432   52813 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key
	I1018 12:26:39.016447   52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt with IP's: []
	I1018 12:26:39.194387   52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt ...
	I1018 12:26:39.194419   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt: {Name:mk9243a20439ab9292d13a3cab98b56367a296c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.194631   52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key ...
	I1018 12:26:39.194649   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key: {Name:mk548ef445e4b58857c8694e04881f9da155116e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:26:39.194883   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem (1338 bytes)
	W1018 12:26:39.194965   52813 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909_empty.pem, impossibly tiny 0 bytes
	I1018 12:26:39.194982   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:26:39.195016   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:26:39.195051   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:26:39.195083   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/key.pem (1679 bytes)
	I1018 12:26:39.195138   52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem (1708 bytes)
	I1018 12:26:39.195753   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:26:39.237771   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:26:39.273475   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:26:39.304754   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:26:39.340590   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 12:26:39.375528   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:26:39.408845   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:26:39.442920   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:26:39.481085   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:26:39.516586   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem --> /usr/share/ca-certificates/9909.pem (1338 bytes)
	I1018 12:26:39.554538   52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem --> /usr/share/ca-certificates/99092.pem (1708 bytes)
	I1018 12:26:39.594522   52813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:26:39.619184   52813 ssh_runner.go:195] Run: openssl version
	I1018 12:26:39.626356   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:26:39.640801   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.646535   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.646588   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:26:39.654893   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:26:39.669539   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9909.pem && ln -fs /usr/share/ca-certificates/9909.pem /etc/ssl/certs/9909.pem"
	I1018 12:26:39.684162   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.689731   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.689790   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9909.pem
	I1018 12:26:39.697600   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9909.pem /etc/ssl/certs/51391683.0"
	I1018 12:26:39.714166   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99092.pem && ln -fs /usr/share/ca-certificates/99092.pem /etc/ssl/certs/99092.pem"
	I1018 12:26:39.729837   52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.735419   52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.735488   52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99092.pem
	I1018 12:26:39.743203   52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99092.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:26:39.758932   52813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:26:39.765101   52813 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:26:39.765169   52813 kubeadm.go:400] StartCluster: {Name:auto-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:26:39.765332   52813 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:26:39.785247   52813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:26:39.798374   52813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:26:39.810946   52813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:26:39.825029   52813 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:26:39.825056   52813 kubeadm.go:157] found existing configuration files:
	
	I1018 12:26:39.825096   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:26:39.836919   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:26:39.836997   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:26:39.849872   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:26:39.861692   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:26:39.861767   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:26:39.877485   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:26:39.890697   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:26:39.890777   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:26:39.906568   52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:26:39.920626   52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:26:39.920740   52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
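
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is probed for the expected control-plane URL and removed when the probe fails (here they simply do not exist yet, so every probe exits non-zero). A minimal sketch of that pattern follows; the runCmd helper is hypothetical and runs locally, whereas minikube executes these commands on the guest over SSH.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runCmd is a stand-in for executing a command on the guest; minikube
	// does this via its ssh_runner, here we just run locally for illustration.
	func runCmd(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}
	
	func cleanupStaleKubeconfigs(controlPlaneURL string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// If the file does not mention the expected API server URL
			// (or does not exist at all), remove it so kubeadm can rewrite it.
			if err := runCmd("sudo", "grep", controlPlaneURL, f); err != nil {
				fmt.Printf("%s looks stale, removing\n", f)
				_ = runCmd("sudo", "rm", "-f", f)
			}
		}
	}
	
	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
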
	I1018 12:26:39.936398   52813 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 12:26:39.998219   52813 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:26:39.998340   52813 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:26:40.111469   52813 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:26:40.111618   52813 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:26:40.111795   52813 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:26:40.128525   52813 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:26:40.130607   52813 out.go:252]   - Generating certificates and keys ...
	I1018 12:26:40.130710   52813 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:26:40.130803   52813 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:26:40.350726   52813 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:26:40.455768   52813 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:26:40.598243   52813 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:26:41.011504   52813 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:26:41.091757   52813 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:26:41.092141   52813 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I1018 12:26:41.376370   52813 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:26:41.376756   52813 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I1018 12:26:41.679155   52813 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:26:41.832796   52813 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:26:42.091476   52813 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:26:42.091617   52813 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:26:42.555206   52813 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:26:42.822944   52813 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:26:43.272107   52813 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:26:43.527688   52813 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:26:43.769537   52813 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:26:43.770332   52813 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:26:43.773363   52813 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:26:39.521607   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": read tcp 192.168.39.1:35984->192.168.39.140:8443: read: connection reset by peer
	I1018 12:26:39.521660   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:39.522161   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:39.940469   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:39.941178   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:40.440329   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:40.441012   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:40.940495   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:40.941051   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:41.440547   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:41.441243   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:41.939828   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:41.940532   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:42.440175   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:42.440815   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:42.940483   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:42.941097   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:43.439852   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:43.440639   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:43.940431   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:43.941130   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
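
The api_server.go lines from pid 52283 above are a plain poll: hit the apiserver's /healthz roughly every 500ms and treat "connection refused" or "connection reset" as "not up yet". A minimal sketch of that loop, under the assumption that it is just HTTP polling with a deadline; the endpoint, interval and timeout values are illustrative, and TLS verification is skipped here only to keep the sketch short (a real client should trust the cluster CA).

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForAPIServer polls the healthz endpoint until it answers 200 "ok"
	// or the deadline passes. A connection error usually just means the
	// apiserver has not bound the port yet, so sleep and retry.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Sketch only: verify against the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}
	
	func main() {
		if err := waitForAPIServer("https://192.168.39.140:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
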
	I1018 12:26:39.730484   54024 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 12:26:39.730631   54024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:26:39.730675   54024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:26:39.746220   54024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I1018 12:26:39.746691   54024 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:26:39.747252   54024 main.go:141] libmachine: Using API Version  1
	I1018 12:26:39.747278   54024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:26:39.747712   54024 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:26:39.747910   54024 main.go:141] libmachine: (kindnet-720125) Calling .GetMachineName
	I1018 12:26:39.748157   54024 main.go:141] libmachine: (kindnet-720125) Calling .DriverName
	I1018 12:26:39.748327   54024 start.go:159] libmachine.API.Create for "kindnet-720125" (driver="kvm2")
	I1018 12:26:39.748358   54024 client.go:168] LocalClient.Create starting
	I1018 12:26:39.748391   54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem
	I1018 12:26:39.748425   54024 main.go:141] libmachine: Decoding PEM data...
	I1018 12:26:39.748441   54024 main.go:141] libmachine: Parsing certificate...
	I1018 12:26:39.748493   54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem
	I1018 12:26:39.748514   54024 main.go:141] libmachine: Decoding PEM data...
	I1018 12:26:39.748527   54024 main.go:141] libmachine: Parsing certificate...
	I1018 12:26:39.748542   54024 main.go:141] libmachine: Running pre-create checks...
	I1018 12:26:39.748555   54024 main.go:141] libmachine: (kindnet-720125) Calling .PreCreateCheck
	I1018 12:26:39.748883   54024 main.go:141] libmachine: (kindnet-720125) Calling .GetConfigRaw
	I1018 12:26:39.749274   54024 main.go:141] libmachine: Creating machine...
	I1018 12:26:39.749304   54024 main.go:141] libmachine: (kindnet-720125) Calling .Create
	I1018 12:26:39.749445   54024 main.go:141] libmachine: (kindnet-720125) creating domain...
	I1018 12:26:39.749466   54024 main.go:141] libmachine: (kindnet-720125) creating network...
	I1018 12:26:39.750975   54024 main.go:141] libmachine: (kindnet-720125) DBG | found existing default network
	I1018 12:26:39.751279   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network connections='3'>
	I1018 12:26:39.751320   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>default</name>
	I1018 12:26:39.751345   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 12:26:39.751362   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <forward mode='nat'>
	I1018 12:26:39.751384   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <nat>
	I1018 12:26:39.751398   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <port start='1024' end='65535'/>
	I1018 12:26:39.751406   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </nat>
	I1018 12:26:39.751412   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </forward>
	I1018 12:26:39.751448   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 12:26:39.751488   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 12:26:39.751506   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 12:26:39.751517   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.751527   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 12:26:39.751535   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.751543   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.751557   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.751576   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.752366   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.752168   54053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:24:f4} reservation:<nil>}
	I1018 12:26:39.753108   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.753033   54053 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260370}
	I1018 12:26:39.753127   54024 main.go:141] libmachine: (kindnet-720125) DBG | defining private network:
	I1018 12:26:39.753137   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.753143   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
	I1018 12:26:39.753152   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>mk-kindnet-720125</name>
	I1018 12:26:39.753159   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <dns enable='no'/>
	I1018 12:26:39.753168   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1018 12:26:39.753175   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.753184   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1018 12:26:39.753190   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.753213   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.753246   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.753262   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.759190   54024 main.go:141] libmachine: (kindnet-720125) DBG | creating private network mk-kindnet-720125 192.168.50.0/24...
	I1018 12:26:39.842530   54024 main.go:141] libmachine: (kindnet-720125) DBG | private network mk-kindnet-720125 192.168.50.0/24 created
	I1018 12:26:39.842829   54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
	I1018 12:26:39.842844   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>mk-kindnet-720125</name>
	I1018 12:26:39.842855   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>57af09bd-510d-4d07-b5da-0d64b9c8c775</uuid>
	I1018 12:26:39.842865   54024 main.go:141] libmachine: (kindnet-720125) setting up store path in /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
	I1018 12:26:39.842873   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 12:26:39.842883   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <mac address='52:54:00:4a:b8:f3'/>
	I1018 12:26:39.842890   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <dns enable='no'/>
	I1018 12:26:39.842900   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1018 12:26:39.842912   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <dhcp>
	I1018 12:26:39.842920   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1018 12:26:39.842926   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </dhcp>
	I1018 12:26:39.842937   54024 main.go:141] libmachine: (kindnet-720125) building disk image from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 12:26:39.842947   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </ip>
	I1018 12:26:39.842958   54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
	I1018 12:26:39.842975   54024 main.go:141] libmachine: (kindnet-720125) Downloading /home/jenkins/minikube-integration/21647-6010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 12:26:39.842995   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:39.843018   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.842834   54053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:40.099390   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.099247   54053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/id_rsa...
	I1018 12:26:40.381985   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381830   54053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk...
	I1018 12:26:40.382025   54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing magic tar header
	I1018 12:26:40.382039   54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing SSH key tar header
	I1018 12:26:40.382049   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381994   54053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
	I1018 12:26:40.382145   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125
	I1018 12:26:40.382185   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 (perms=drwx------)
	I1018 12:26:40.382204   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines (perms=drwxr-xr-x)
	I1018 12:26:40.382225   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines
	I1018 12:26:40.382245   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 12:26:40.382257   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010
	I1018 12:26:40.382268   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 12:26:40.382278   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins
	I1018 12:26:40.382302   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube (perms=drwxr-xr-x)
	I1018 12:26:40.382314   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010 (perms=drwxrwxr-x)
	I1018 12:26:40.382334   54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home
	I1018 12:26:40.382345   54024 main.go:141] libmachine: (kindnet-720125) DBG | skipping /home - not owner
	I1018 12:26:40.382356   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 12:26:40.382367   54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 12:26:40.382376   54024 main.go:141] libmachine: (kindnet-720125) defining domain...
	I1018 12:26:40.383798   54024 main.go:141] libmachine: (kindnet-720125) defining domain using XML: 
	I1018 12:26:40.383831   54024 main.go:141] libmachine: (kindnet-720125) <domain type='kvm'>
	I1018 12:26:40.383842   54024 main.go:141] libmachine: (kindnet-720125)   <name>kindnet-720125</name>
	I1018 12:26:40.383853   54024 main.go:141] libmachine: (kindnet-720125)   <memory unit='MiB'>3072</memory>
	I1018 12:26:40.383858   54024 main.go:141] libmachine: (kindnet-720125)   <vcpu>2</vcpu>
	I1018 12:26:40.383862   54024 main.go:141] libmachine: (kindnet-720125)   <features>
	I1018 12:26:40.383867   54024 main.go:141] libmachine: (kindnet-720125)     <acpi/>
	I1018 12:26:40.383875   54024 main.go:141] libmachine: (kindnet-720125)     <apic/>
	I1018 12:26:40.383882   54024 main.go:141] libmachine: (kindnet-720125)     <pae/>
	I1018 12:26:40.383886   54024 main.go:141] libmachine: (kindnet-720125)   </features>
	I1018 12:26:40.383891   54024 main.go:141] libmachine: (kindnet-720125)   <cpu mode='host-passthrough'>
	I1018 12:26:40.383898   54024 main.go:141] libmachine: (kindnet-720125)   </cpu>
	I1018 12:26:40.383905   54024 main.go:141] libmachine: (kindnet-720125)   <os>
	I1018 12:26:40.383916   54024 main.go:141] libmachine: (kindnet-720125)     <type>hvm</type>
	I1018 12:26:40.383924   54024 main.go:141] libmachine: (kindnet-720125)     <boot dev='cdrom'/>
	I1018 12:26:40.383934   54024 main.go:141] libmachine: (kindnet-720125)     <boot dev='hd'/>
	I1018 12:26:40.383944   54024 main.go:141] libmachine: (kindnet-720125)     <bootmenu enable='no'/>
	I1018 12:26:40.383948   54024 main.go:141] libmachine: (kindnet-720125)   </os>
	I1018 12:26:40.383953   54024 main.go:141] libmachine: (kindnet-720125)   <devices>
	I1018 12:26:40.383957   54024 main.go:141] libmachine: (kindnet-720125)     <disk type='file' device='cdrom'>
	I1018 12:26:40.383997   54024 main.go:141] libmachine: (kindnet-720125)       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
	I1018 12:26:40.384023   54024 main.go:141] libmachine: (kindnet-720125)       <target dev='hdc' bus='scsi'/>
	I1018 12:26:40.384037   54024 main.go:141] libmachine: (kindnet-720125)       <readonly/>
	I1018 12:26:40.384051   54024 main.go:141] libmachine: (kindnet-720125)     </disk>
	I1018 12:26:40.384065   54024 main.go:141] libmachine: (kindnet-720125)     <disk type='file' device='disk'>
	I1018 12:26:40.384079   54024 main.go:141] libmachine: (kindnet-720125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 12:26:40.384096   54024 main.go:141] libmachine: (kindnet-720125)       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
	I1018 12:26:40.384108   54024 main.go:141] libmachine: (kindnet-720125)       <target dev='hda' bus='virtio'/>
	I1018 12:26:40.384119   54024 main.go:141] libmachine: (kindnet-720125)     </disk>
	I1018 12:26:40.384133   54024 main.go:141] libmachine: (kindnet-720125)     <interface type='network'>
	I1018 12:26:40.384146   54024 main.go:141] libmachine: (kindnet-720125)       <source network='mk-kindnet-720125'/>
	I1018 12:26:40.384157   54024 main.go:141] libmachine: (kindnet-720125)       <model type='virtio'/>
	I1018 12:26:40.384168   54024 main.go:141] libmachine: (kindnet-720125)     </interface>
	I1018 12:26:40.384179   54024 main.go:141] libmachine: (kindnet-720125)     <interface type='network'>
	I1018 12:26:40.384192   54024 main.go:141] libmachine: (kindnet-720125)       <source network='default'/>
	I1018 12:26:40.384202   54024 main.go:141] libmachine: (kindnet-720125)       <model type='virtio'/>
	I1018 12:26:40.384216   54024 main.go:141] libmachine: (kindnet-720125)     </interface>
	I1018 12:26:40.384230   54024 main.go:141] libmachine: (kindnet-720125)     <serial type='pty'>
	I1018 12:26:40.384236   54024 main.go:141] libmachine: (kindnet-720125)       <target port='0'/>
	I1018 12:26:40.384245   54024 main.go:141] libmachine: (kindnet-720125)     </serial>
	I1018 12:26:40.384254   54024 main.go:141] libmachine: (kindnet-720125)     <console type='pty'>
	I1018 12:26:40.384266   54024 main.go:141] libmachine: (kindnet-720125)       <target type='serial' port='0'/>
	I1018 12:26:40.384277   54024 main.go:141] libmachine: (kindnet-720125)     </console>
	I1018 12:26:40.384304   54024 main.go:141] libmachine: (kindnet-720125)     <rng model='virtio'>
	I1018 12:26:40.384323   54024 main.go:141] libmachine: (kindnet-720125)       <backend model='random'>/dev/random</backend>
	I1018 12:26:40.384332   54024 main.go:141] libmachine: (kindnet-720125)     </rng>
	I1018 12:26:40.384340   54024 main.go:141] libmachine: (kindnet-720125)   </devices>
	I1018 12:26:40.384354   54024 main.go:141] libmachine: (kindnet-720125) </domain>
	I1018 12:26:40.384364   54024 main.go:141] libmachine: (kindnet-720125) 
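
The XML above is what the kvm2 driver hands to libvirt to define the VM. The define/start sequence itself is short; the sketch below uses the libvirt Go bindings, with the import path and the heavily trimmed domain XML as assumptions for illustration rather than the driver's actual code (the real definition, as the log shows, also wires up the disks, two network interfaces, serial console and RNG device).

	package main
	
	import (
		"log"
	
		"libvirt.org/go/libvirt"
	)
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connecting to libvirt: %v", err)
		}
		defer conn.Close()
	
		// Heavily trimmed domain XML; enough to define, not enough to boot
		// a useful guest (no disks or NICs).
		domainXML := `<domain type='kvm'>
	  <name>example-vm</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <os><type>hvm</type></os>
	</domain>`
	
		// Define makes the domain known to libvirt; Create actually starts it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("defining domain: %v", err)
		}
		defer dom.Free()
	
		if err := dom.Create(); err != nil {
			log.Fatalf("starting domain: %v", err)
		}
		log.Println("domain defined and started")
	}
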
	I1018 12:26:40.388970   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:3f:a0:78 in network default
	I1018 12:26:40.389652   54024 main.go:141] libmachine: (kindnet-720125) starting domain...
	I1018 12:26:40.389680   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:40.389688   54024 main.go:141] libmachine: (kindnet-720125) ensuring networks are active...
	I1018 12:26:40.390420   54024 main.go:141] libmachine: (kindnet-720125) Ensuring network default is active
	I1018 12:26:40.390825   54024 main.go:141] libmachine: (kindnet-720125) Ensuring network mk-kindnet-720125 is active
	I1018 12:26:40.391737   54024 main.go:141] libmachine: (kindnet-720125) getting domain XML...
	I1018 12:26:40.393514   54024 main.go:141] libmachine: (kindnet-720125) DBG | starting domain XML:
	I1018 12:26:40.393530   54024 main.go:141] libmachine: (kindnet-720125) DBG | <domain type='kvm'>
	I1018 12:26:40.393539   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <name>kindnet-720125</name>
	I1018 12:26:40.393548   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <uuid>d3c666c7-5967-40a8-9b36-6cfb4dcc1fb1</uuid>
	I1018 12:26:40.393556   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 12:26:40.393564   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 12:26:40.393573   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 12:26:40.393580   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <os>
	I1018 12:26:40.393593   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 12:26:40.393629   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <boot dev='cdrom'/>
	I1018 12:26:40.393654   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <boot dev='hd'/>
	I1018 12:26:40.393666   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <bootmenu enable='no'/>
	I1018 12:26:40.393675   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </os>
	I1018 12:26:40.393682   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <features>
	I1018 12:26:40.393690   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <acpi/>
	I1018 12:26:40.393698   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <apic/>
	I1018 12:26:40.393707   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <pae/>
	I1018 12:26:40.393717   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </features>
	I1018 12:26:40.393726   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 12:26:40.393736   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <clock offset='utc'/>
	I1018 12:26:40.393745   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 12:26:40.393755   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_reboot>restart</on_reboot>
	I1018 12:26:40.393764   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <on_crash>destroy</on_crash>
	I1018 12:26:40.393774   54024 main.go:141] libmachine: (kindnet-720125) DBG |   <devices>
	I1018 12:26:40.393805   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 12:26:40.393828   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <disk type='file' device='cdrom'>
	I1018 12:26:40.393841   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <driver name='qemu' type='raw'/>
	I1018 12:26:40.393857   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
	I1018 12:26:40.393871   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 12:26:40.393896   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <readonly/>
	I1018 12:26:40.393912   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 12:26:40.393927   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </disk>
	I1018 12:26:40.393940   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <disk type='file' device='disk'>
	I1018 12:26:40.393952   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 12:26:40.393965   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
	I1018 12:26:40.393971   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target dev='hda' bus='virtio'/>
	I1018 12:26:40.393982   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 12:26:40.393987   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </disk>
	I1018 12:26:40.393996   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 12:26:40.394012   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 12:26:40.394022   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </controller>
	I1018 12:26:40.394034   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 12:26:40.394049   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 12:26:40.394062   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 12:26:40.394074   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </controller>
	I1018 12:26:40.394090   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <interface type='network'>
	I1018 12:26:40.394101   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <mac address='52:54:00:0e:b7:f4'/>
	I1018 12:26:40.394112   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source network='mk-kindnet-720125'/>
	I1018 12:26:40.394129   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <model type='virtio'/>
	I1018 12:26:40.394144   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 12:26:40.394159   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </interface>
	I1018 12:26:40.394175   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <interface type='network'>
	I1018 12:26:40.394193   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <mac address='52:54:00:3f:a0:78'/>
	I1018 12:26:40.394204   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <source network='default'/>
	I1018 12:26:40.394215   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <model type='virtio'/>
	I1018 12:26:40.394226   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 12:26:40.394235   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </interface>
	I1018 12:26:40.394244   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <serial type='pty'>
	I1018 12:26:40.394254   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target type='isa-serial' port='0'>
	I1018 12:26:40.394281   54024 main.go:141] libmachine: (kindnet-720125) DBG |         <model name='isa-serial'/>
	I1018 12:26:40.394319   54024 main.go:141] libmachine: (kindnet-720125) DBG |       </target>
	I1018 12:26:40.394338   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </serial>
	I1018 12:26:40.394356   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <console type='pty'>
	I1018 12:26:40.394370   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <target type='serial' port='0'/>
	I1018 12:26:40.394380   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </console>
	I1018 12:26:40.394393   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <input type='mouse' bus='ps2'/>
	I1018 12:26:40.394402   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 12:26:40.394415   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <audio id='1' type='none'/>
	I1018 12:26:40.394423   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <memballoon model='virtio'>
	I1018 12:26:40.394443   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 12:26:40.394459   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </memballoon>
	I1018 12:26:40.394470   54024 main.go:141] libmachine: (kindnet-720125) DBG |     <rng model='virtio'>
	I1018 12:26:40.394482   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <backend model='random'>/dev/random</backend>
	I1018 12:26:40.394496   54024 main.go:141] libmachine: (kindnet-720125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 12:26:40.394505   54024 main.go:141] libmachine: (kindnet-720125) DBG |     </rng>
	I1018 12:26:40.394513   54024 main.go:141] libmachine: (kindnet-720125) DBG |   </devices>
	I1018 12:26:40.394522   54024 main.go:141] libmachine: (kindnet-720125) DBG | </domain>
	I1018 12:26:40.394542   54024 main.go:141] libmachine: (kindnet-720125) DBG | 
	I1018 12:26:41.782659   54024 main.go:141] libmachine: (kindnet-720125) waiting for domain to start...
	I1018 12:26:41.784057   54024 main.go:141] libmachine: (kindnet-720125) domain is now running
	I1018 12:26:41.784080   54024 main.go:141] libmachine: (kindnet-720125) waiting for IP...
	I1018 12:26:41.784831   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:41.785431   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:41.785459   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:41.785812   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:41.785887   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.785810   54053 retry.go:31] will retry after 204.388807ms: waiting for domain to come up
	I1018 12:26:41.992592   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:41.993377   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:41.993404   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:41.993817   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:41.993887   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.993817   54053 retry.go:31] will retry after 374.842513ms: waiting for domain to come up
	I1018 12:26:42.370189   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:42.370750   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:42.370778   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:42.371199   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:42.371231   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.371171   54053 retry.go:31] will retry after 382.206082ms: waiting for domain to come up
	I1018 12:26:42.755732   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:42.756456   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:42.756481   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:42.756848   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:42.756877   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.756832   54053 retry.go:31] will retry after 434.513358ms: waiting for domain to come up
	I1018 12:26:43.192495   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:43.193112   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:43.193137   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:43.193557   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:43.193584   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.193492   54053 retry.go:31] will retry after 622.396959ms: waiting for domain to come up
	I1018 12:26:43.818233   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:43.819067   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:43.819104   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:43.819584   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:43.819616   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.819536   54053 retry.go:31] will retry after 815.894877ms: waiting for domain to come up
	I1018 12:26:44.636575   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:44.637323   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:44.637353   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:44.637721   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:44.637759   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:44.637705   54053 retry.go:31] will retry after 1.067259778s: waiting for domain to come up
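
The waiting-for-IP lines show the driver polling libvirt's DHCP leases (and then ARP) for the new guest and retrying with growing, slightly randomized delays until an address appears. A generic version of that retry shape follows; it is an illustrative helper, not minikube's actual retry package.

	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
	// reached, sleeping a growing, jittered delay between attempts - the same
	// shape as the "will retry after ..." lines above.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
	}
	
	func main() {
		attempts := 0
		err := retryWithBackoff(10, 200*time.Millisecond, func() error {
			attempts++
			if attempts < 4 {
				return fmt.Errorf("waiting for domain to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
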
	I1018 12:26:43.775588   52813 out.go:252]   - Booting up control plane ...
	I1018 12:26:43.775698   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:26:43.775800   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:26:43.777341   52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:26:43.800502   52813 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:26:43.800688   52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:26:43.808677   52813 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:26:43.808867   52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:26:43.809016   52813 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:26:43.996155   52813 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:26:43.996352   52813 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:26:44.997230   52813 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001669295s
	I1018 12:26:45.000531   52813 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:26:45.000667   52813 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.13:8443/livez
	I1018 12:26:45.000814   52813 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:26:45.000947   52813 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:26:44.439803   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:44.440530   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:44.940153   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:44.940832   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:45.439761   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:45.440519   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:45.940122   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:45.940844   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:46.439543   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:46.440225   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:46.939926   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:46.940690   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:47.440072   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:47.440765   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:47.940122   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:47.940902   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:48.440476   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:48.441175   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:48.940453   52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I1018 12:26:48.941104   52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I1018 12:26:45.706998   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:45.707808   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:45.707838   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:45.708201   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:45.708263   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:45.708195   54053 retry.go:31] will retry after 1.310839951s: waiting for domain to come up
	I1018 12:26:47.020928   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:47.021787   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:47.021817   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:47.022144   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:47.022169   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:47.022128   54053 retry.go:31] will retry after 1.184917747s: waiting for domain to come up
	I1018 12:26:48.208893   54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
	I1018 12:26:48.210115   54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
	I1018 12:26:48.210353   54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
	I1018 12:26:48.210378   54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
	I1018 12:26:48.210400   54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:48.210282   54053 retry.go:31] will retry after 2.142899269s: waiting for domain to come up
	I1018 12:26:47.544998   52813 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.544969296s
	I1018 12:26:49.216065   52813 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.216981491s
	I1018 12:26:52.002383   52813 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003405486s
	I1018 12:26:52.027872   52813 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:26:52.051441   52813 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:26:52.081495   52813 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:26:52.081766   52813 kubeadm.go:318] [mark-control-plane] Marking the node auto-720125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:26:52.106887   52813 kubeadm.go:318] [bootstrap-token] Using token: j4uyf3.sh7e2l27mgyytkmc
	
	
	==> Docker <==
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.120729117Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212112555Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212342190Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 18 12:25:54 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.421865126Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:26:02 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:02Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.830994794Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.904996286Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.905088942Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 18 12:26:06 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919653355Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919692389Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.923070969Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.924597650Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:14 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:14.766195371Z" level=info msg="ignoring event" container=28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-jc7tz_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"50ccc6bf5c1dc8dbc44839aac4aaf80b91e88cfa36a35e71c99ecbc99a5d2efb\""
	Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579823134Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579851904Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584080633Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584132115Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.670933568Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:50.571698862Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:50.571843908Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 18 12:26:50 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:50Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bffc616999573       6e38f40d628db                                                                                         4 seconds ago        Running             storage-provisioner       2                   002d263a57e06       storage-provisioner
	3a2c1a468e77b       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        51 seconds ago       Running             kubernetes-dashboard      0                   22320121e1a75       kubernetes-dashboard-855c9754f9-8frzf
	14a606bd02ea2       52546a367cc9e                                                                                         About a minute ago   Running             coredns                   1                   2bf7782642e47       coredns-66bc5c9577-s7znr
	3181063a95749       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   f01a1904eab6f       busybox
	28ffefdfcaefa       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   002d263a57e06       storage-provisioner
	e74b601e6b20b       fc25172553d79                                                                                         About a minute ago   Running             kube-proxy                1                   5916362f7151c       kube-proxy-hmf6q
	aa45133c5292e       7dd6aaa1717ab                                                                                         About a minute ago   Running             kube-scheduler            1                   c386eff006256       kube-scheduler-default-k8s-diff-port-948988
	0d33563cfd415       5f1f5298c888d                                                                                         About a minute ago   Running             etcd                      1                   aa5a738a016e1       etcd-default-k8s-diff-port-948988
	482f645840fbd       c3994bc696102                                                                                         About a minute ago   Running             kube-apiserver            1                   6d80f3bf62181       kube-apiserver-default-k8s-diff-port-948988
	cbcb65b91df5f       c80c8dbafe7dd                                                                                         About a minute ago   Running             kube-controller-manager   1                   9b74e777c1d81       kube-controller-manager-default-k8s-diff-port-948988
	06b0d6a0fe73a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   02768f34f11ea       busybox
	bf61d222c7e61       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   4a9e23fe5352b       coredns-66bc5c9577-s7znr
	72d0dd1b3e6d1       fc25172553d79                                                                                         2 minutes ago        Exited              kube-proxy                0                   3b1b31ff39772       kube-proxy-hmf6q
	ac171ed99aa7b       7dd6aaa1717ab                                                                                         2 minutes ago        Exited              kube-scheduler            0                   27f94a06346ec       kube-scheduler-default-k8s-diff-port-948988
	07dc691cd2b41       c80c8dbafe7dd                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   7c2c9ab301ac9       kube-controller-manager-default-k8s-diff-port-948988
	5a3d271b1a7a4       5f1f5298c888d                                                                                         2 minutes ago        Exited              etcd                      0                   7776a7d62b3b1       etcd-default-k8s-diff-port-948988
	5dfc625534d2e       c3994bc696102                                                                                         2 minutes ago        Exited              kube-apiserver            0                   20ac876b72a06       kube-apiserver-default-k8s-diff-port-948988
	
	
	==> coredns [14a606bd02ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47328 - 15007 "HINFO IN 5766678739025722613.5866360335637854453. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103273346s
	
	
	==> coredns [bf61d222c7e6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48576 - 64076 "HINFO IN 6932009071857870960.7176900972779109838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.13763s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-948988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-948988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-948988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_24_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:24:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-948988
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:26:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:25:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.154
	  Hostname:    default-k8s-diff-port-948988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7b095482f0f4bd294376564492aae84
	  System UUID:                d7b09548-2f0f-4bd2-9437-6564492aae84
	  Boot ID:                    5dbb338e-d666-4176-8009-ddf389982046
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 coredns-66bc5c9577-s7znr                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m13s
	  kube-system                 etcd-default-k8s-diff-port-948988                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-948988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-948988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-hmf6q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-948988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 metrics-server-746fcd58dc-7788d                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gxs6s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8frzf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m11s                  kube-proxy       
	  Normal   Starting                 66s                    kube-proxy       
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x7 over 2m28s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m17s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeReady
	  Normal   RegisteredNode           2m16s                  node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
	  Normal   Starting                 75s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x7 over 75s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  75s                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 71s                    kubelet          Node default-k8s-diff-port-948988 has been rebooted, boot id: 5dbb338e-d666-4176-8009-ddf389982046
	  Normal   RegisteredNode           67s                    node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
	  Normal   Starting                 5s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Oct18 12:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001590] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004075] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.931702] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.130272] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102368] kauditd_printk_skb: 449 callbacks suppressed
	[  +5.669077] kauditd_printk_skb: 165 callbacks suppressed
	[  +5.952206] kauditd_printk_skb: 134 callbacks suppressed
	[  +2.969146] kauditd_printk_skb: 264 callbacks suppressed
	[Oct18 12:26] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.224441] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [0d33563cfd41] <==
	{"level":"info","ts":"2025-10-18T12:26:50.186827Z","caller":"traceutil/trace.go:172","msg":"trace[1372174769] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"399.841982ms","start":"2025-10-18T12:26:49.786974Z","end":"2025-10-18T12:26:50.186816Z","steps":["trace[1372174769] 'range keys from in-memory index tree'  (duration: 399.699339ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.186874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.786955Z","time spent":"399.895498ms","remote":"127.0.0.1:58530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-18T12:26:50.333810Z","caller":"traceutil/trace.go:172","msg":"trace[111824645] linearizableReadLoop","detail":"{readStateIndex:805; appliedIndex:805; }","duration":"469.70081ms","start":"2025-10-18T12:26:49.864083Z","end":"2025-10-18T12:26:50.333784Z","steps":["trace[111824645] 'read index received'  (duration: 469.662848ms)","trace[111824645] 'applied index is now lower than readState.Index'  (duration: 36.562µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:26:50.333966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"469.888536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.334000Z","caller":"traceutil/trace.go:172","msg":"trace[512175939] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:752; }","duration":"469.93891ms","start":"2025-10-18T12:26:49.864053Z","end":"2025-10-18T12:26:50.333992Z","steps":["trace[512175939] 'agreement among raft nodes before linearized reading'  (duration: 469.85272ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.334133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.864029Z","time spent":"469.995ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/flowschemas\" limit:1 "}
	{"level":"info","ts":"2025-10-18T12:26:50.334869Z","caller":"traceutil/trace.go:172","msg":"trace[1055338688] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"495.901712ms","start":"2025-10-18T12:26:49.838955Z","end":"2025-10-18T12:26:50.334857Z","steps":["trace[1055338688] 'process raft request'  (duration: 495.716875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.335648Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.838929Z","time spent":"495.989792ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" value_size:3336 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.443549Z","caller":"traceutil/trace.go:172","msg":"trace[381001447] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:806; }","duration":"109.522762ms","start":"2025-10-18T12:26:50.333879Z","end":"2025-10-18T12:26:50.443401Z","steps":["trace[381001447] 'read index received'  (duration: 109.304835ms)","trace[381001447] 'applied index is now lower than readState.Index'  (duration: 216.349µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:26:50.443898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.661283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.444087Z","caller":"traceutil/trace.go:172","msg":"trace[269629089] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"254.861648ms","start":"2025-10-18T12:26:50.189213Z","end":"2025-10-18T12:26:50.444075Z","steps":["trace[269629089] 'agreement among raft nodes before linearized reading'  (duration: 254.569015ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:26:50.444986Z","caller":"traceutil/trace.go:172","msg":"trace[1424081342] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"604.238859ms","start":"2025-10-18T12:26:49.840736Z","end":"2025-10-18T12:26:50.444975Z","steps":["trace[1424081342] 'process raft request'  (duration: 603.242308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.445058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.542092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-18T12:26:50.445075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840723Z","time spent":"604.304586ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" value_size:5080 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.445122Z","caller":"traceutil/trace.go:172","msg":"trace[399968637] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:752; }","duration":"481.574042ms","start":"2025-10-18T12:26:49.963502Z","end":"2025-10-18T12:26:50.445076Z","steps":["trace[399968637] 'agreement among raft nodes before linearized reading'  (duration: 481.324719ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.445200Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.963483Z","time spent":"481.704642ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":27,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
	{"level":"info","ts":"2025-10-18T12:26:50.446712Z","caller":"traceutil/trace.go:172","msg":"trace[824860143] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.054697ms","start":"2025-10-18T12:26:49.840601Z","end":"2025-10-18T12:26:50.446656Z","steps":["trace[824860143] 'process raft request'  (duration: 603.007187ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.446779Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840584Z","time spent":"606.160126ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" value_size:5531 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:26:50.446897Z","caller":"traceutil/trace.go:172","msg":"trace[1942397087] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.190325ms","start":"2025-10-18T12:26:49.840699Z","end":"2025-10-18T12:26:50.446890Z","steps":["trace[1942397087] 'process raft request'  (duration: 603.239357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.446935Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840694Z","time spent":"606.222506ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" value_size:4413 >> failure:<>"}
	{"level":"warn","ts":"2025-10-18T12:26:50.446998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.548699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" limit:1 ","response":"range_response_count:1 size:4976"}
	{"level":"info","ts":"2025-10-18T12:26:50.447420Z","caller":"traceutil/trace.go:172","msg":"trace[673088281] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988; range_end:; response_count:1; response_revision:753; }","duration":"106.587183ms","start":"2025-10-18T12:26:50.340430Z","end":"2025-10-18T12:26:50.447017Z","steps":["trace[673088281] 'agreement among raft nodes before linearized reading'  (duration: 106.46749ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:26:50.448436Z","caller":"traceutil/trace.go:172","msg":"trace[1596410668] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"250.464751ms","start":"2025-10-18T12:26:50.197959Z","end":"2025-10-18T12:26:50.448424Z","steps":["trace[1596410668] 'process raft request'  (duration: 246.217803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:26:50.448558Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.631999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:26:50.448589Z","caller":"traceutil/trace.go:172","msg":"trace[1722869229] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:753; }","duration":"100.661173ms","start":"2025-10-18T12:26:50.347914Z","end":"2025-10-18T12:26:50.448575Z","steps":["trace[1722869229] 'agreement among raft nodes before linearized reading'  (duration: 100.605021ms)"],"step_count":1}
	
	
	==> etcd [5a3d271b1a7a] <==
	{"level":"info","ts":"2025-10-18T12:24:40.137898Z","caller":"traceutil/trace.go:172","msg":"trace[1031995627] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"153.504515ms","start":"2025-10-18T12:24:39.984387Z","end":"2025-10-18T12:24:40.137891Z","steps":["trace[1031995627] 'process raft request'  (duration: 106.790781ms)","trace[1031995627] 'compare'  (duration: 46.286033ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:24:40.138807Z","caller":"traceutil/trace.go:172","msg":"trace[2073145057] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"154.722362ms","start":"2025-10-18T12:24:39.984073Z","end":"2025-10-18T12:24:40.138795Z","steps":["trace[2073145057] 'process raft request'  (duration: 153.550593ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.138990Z","caller":"traceutil/trace.go:172","msg":"trace[460852249] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"147.204006ms","start":"2025-10-18T12:24:39.991724Z","end":"2025-10-18T12:24:40.138928Z","steps":["trace[460852249] 'process raft request'  (duration: 145.946011ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.139208Z","caller":"traceutil/trace.go:172","msg":"trace[1691503075] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"130.816492ms","start":"2025-10-18T12:24:40.008382Z","end":"2025-10-18T12:24:40.139199Z","steps":["trace[1691503075] 'process raft request'  (duration: 129.325269ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:24:40.144497Z","caller":"traceutil/trace.go:172","msg":"trace[842550493] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"135.72185ms","start":"2025-10-18T12:24:40.008758Z","end":"2025-10-18T12:24:40.144480Z","steps":["trace[842550493] 'process raft request'  (duration: 128.981035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:24:40.144822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.354219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-18T12:24:40.144866Z","caller":"traceutil/trace.go:172","msg":"trace[397740631] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:370; }","duration":"122.41407ms","start":"2025-10-18T12:24:40.022443Z","end":"2025-10-18T12:24:40.144857Z","steps":["trace[397740631] 'agreement among raft nodes before linearized reading'  (duration: 122.2939ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:25:00.231361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:25:00.231451Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
	{"level":"error","ts":"2025-10-18T12:25:00.231556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:07.245321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:07.249128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.249192Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cb84593c3b1392d","current-leader-member-id":"3cb84593c3b1392d"}
	{"level":"info","ts":"2025-10-18T12:25:07.249489Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:25:07.249534Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:25:07.252745Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:07.252848Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:25:07.252863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:25:07.253498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:07.253553Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:25:07.253569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.256384Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"error","ts":"2025-10-18T12:25:07.256475Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:07.256703Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2025-10-18T12:25:07.256718Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
	
	
	==> kernel <==
	 12:26:53 up 1 min,  0 users,  load average: 2.38, 0.75, 0.26
	Linux default-k8s-diff-port-948988 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [482f645840fb] <==
	E1018 12:25:43.880029       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 12:25:43.880149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1018 12:25:43.881283       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1018 12:25:44.600365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:25:44.665650       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:25:44.707914       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:25:44.717555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:25:46.458993       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:25:46.554520       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:25:46.699128       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:25:47.509491       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:25:47.794476       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.100.186"}
	I1018 12:25:47.820795       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.78.66"}
	W1018 12:26:47.665841       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:26:47.666026       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 12:26:47.666042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 12:26:47.681677       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:26:47.681971       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 12:26:47.682341       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [5dfc625534d2] <==
	W1018 12:25:09.464721       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.517443       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.620363       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.693884       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.721047       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.726611       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.759371       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.795061       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.819207       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.841071       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.864445       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.896679       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.930411       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:09.971423       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.017882       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.045148       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.067233       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.127112       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.133877       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.157359       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.165740       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.173381       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.191257       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.254823       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 12:25:10.300085       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [07dc691cd2b4] <==
	I1018 12:24:37.212816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:24:37.213552       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 12:24:37.214863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:24:37.215195       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 12:24:37.215506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:24:37.215712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:24:37.215992       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:24:37.216210       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:24:37.216297       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:24:37.220772       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:24:37.221277       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:24:37.229865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-948988" podCIDRs=["10.244.0.0/24"]
	I1018 12:24:37.230483       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:24:37.235336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:24:37.236208       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:24:37.243773       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:24:37.261496       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:24:37.262756       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:24:37.263515       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:24:37.263680       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:24:37.332884       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 12:24:37.408817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:24:37.409172       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:24:37.409412       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:24:37.433850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [cbcb65b91df5] <==
	I1018 12:25:46.326514       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:25:46.330568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:25:46.338200       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:25:46.354827       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:25:46.354933       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:25:46.358135       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:25:46.358166       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:25:46.358174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:25:46.361699       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:25:46.362331       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:25:46.362518       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-948988"
	I1018 12:25:46.362582       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:25:46.362715       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:25:46.364998       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:25:46.397419       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 12:25:47.622164       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.637442       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.640602       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.654283       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.654837       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.670862       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:25:47.673502       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1018 12:25:56.364778       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 12:26:47.748771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:26:47.764048       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [72d0dd1b3e6d] <==
	I1018 12:24:41.564008       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:24:41.664708       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:24:41.664884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
	E1018 12:24:41.665067       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:24:41.766806       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:24:41.766902       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:24:41.767037       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:24:41.808707       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:24:41.810126       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:24:41.810170       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:24:41.819567       1 config.go:200] "Starting service config controller"
	I1018 12:24:41.819614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:24:41.819656       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:24:41.819662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:24:41.819679       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:24:41.819685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:24:41.834904       1 config.go:309] "Starting node config controller"
	I1018 12:24:41.835028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:24:41.835056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:24:41.927064       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:24:41.927258       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:24:41.927530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e74b601e6b20] <==
	I1018 12:25:45.811654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:25:45.913019       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:25:45.913130       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
	E1018 12:25:45.913538       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:25:46.627631       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 12:25:46.627729       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 12:25:46.627769       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:25:46.729383       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:25:46.742257       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:25:46.742299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:46.769189       1 config.go:309] "Starting node config controller"
	I1018 12:25:46.769207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:25:46.769215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:25:46.772876       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:25:46.772985       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:25:46.773282       1 config.go:200] "Starting service config controller"
	I1018 12:25:46.773361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:25:46.773393       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:25:46.773398       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:25:46.874997       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:25:46.875472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:25:46.875491       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [aa45133c5292] <==
	I1018 12:25:40.892121       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:25:42.779818       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:25:42.779913       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:25:42.779937       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:25:42.779952       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:25:42.837530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:25:42.837672       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:42.850332       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:42.850953       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:25:42.851127       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:42.851921       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:25:42.953076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ac171ed99aa7] <==
	E1018 12:24:29.521551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:24:29.521602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:24:29.521714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:24:29.521771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:24:29.521820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:24:30.388364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:24:30.423548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:24:30.458398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:24:30.471430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:24:30.482651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:24:30.502659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:24:30.602254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:24:30.613712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:24:30.623631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:24:30.752533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:24:30.774425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:24:30.882034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:24:30.922203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1018 12:24:32.510730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:00.227081       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:25:00.227204       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:00.227889       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:25:00.228116       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:25:00.228207       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:25:00.228229       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:48.808146    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:48.818965    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.224325    4182 apiserver.go:52] "Watching apiserver"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.299725    4182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.334900    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a2da4bd7-fb36-44bc-9e08-4ccbe934a19a-tmp\") pod \"storage-provisioner\" (UID: \"a2da4bd7-fb36-44bc-9e08-4ccbe934a19a\") " pod="kube-system/storage-provisioner"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335035    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-xtables-lock\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335064    4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-lib-modules\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.559117    4182 scope.go:117] "RemoveContainer" containerID="28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584832    4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584904    4182 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585150    4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-7788d_kube-system(482bf974-0dde-4e8e-abde-4c6a50f08c8d): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585190    4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7788d" podUID="482bf974-0dde-4e8e-abde-4c6a50f08c8d"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834067    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834883    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835048    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835180    4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
	Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835659    4182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d8ce1671b6d868f5c427741052d8ba6bc2581e713fc06671798cbeaa0e2467"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.457040    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.473284    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.474210    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.475377    4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587059    4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587186    4182 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587563    4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-gxs6s_kubernetes-dashboard(d9f0a621-1105-44d9-97ff-6ab18a09af31): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587744    4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gxs6s" podUID="d9f0a621-1105-44d9-97ff-6ab18a09af31"
	
	
	==> kubernetes-dashboard [3a2c1a468e77] <==
	2025/10/18 12:26:02 Using namespace: kubernetes-dashboard
	2025/10/18 12:26:02 Using in-cluster config to connect to apiserver
	2025/10/18 12:26:02 Using secret token for csrf signing
	2025/10/18 12:26:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:26:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:26:02 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:26:02 Generating JWE encryption key
	2025/10/18 12:26:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:26:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:26:02 Initializing JWE encryption key from synchronized object
	2025/10/18 12:26:02 Creating in-cluster Sidecar client
	2025/10/18 12:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:26:02 Serving insecurely on HTTP port: 9090
	2025/10/18 12:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:26:02 Starting overwatch
	
	
	==> storage-provisioner [28ffefdfcaef] <==
	I1018 12:25:44.727571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:26:14.742942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bffc61699957] <==
	I1018 12:26:50.783147       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:26:50.814482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:26:50.815137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:26:50.821977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:26:50.846621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:26:50.847757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:26:50.849593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94!
	I1018 12:26:50.847834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da8257ea-b806-4225-a5c2-05037be28c2a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94 became leader
	W1018 12:26:50.873898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:26:50.904698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:26:50.954115       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94!
	W1018 12:26:52.910588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:26:52.924501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1 (81.481248ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7788d" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gxs6s" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (40.41s)

                                                
                                    

Test pass (310/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.3
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.02
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.67
22 TestOffline 110.83
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 207.42
29 TestAddons/serial/Volcano 43.52
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.63
35 TestAddons/parallel/Registry 18.26
36 TestAddons/parallel/RegistryCreds 0.73
37 TestAddons/parallel/Ingress 23.19
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 6.22
41 TestAddons/parallel/CSI 46.05
42 TestAddons/parallel/Headlamp 19.37
43 TestAddons/parallel/CloudSpanner 6.6
44 TestAddons/parallel/LocalPath 10.05
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 12.25
48 TestAddons/StoppedEnableDisable 13.75
49 TestCertOptions 81.38
50 TestCertExpiration 310.24
51 TestDockerFlags 85.3
52 TestForceSystemdFlag 79.19
53 TestForceSystemdEnv 72.52
55 TestKVMDriverInstallOrUpdate 0.84
59 TestErrorSpam/setup 41.07
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.4
63 TestErrorSpam/unpause 1.66
64 TestErrorSpam/stop 5.34
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 88.17
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 67.67
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.34
76 TestFunctional/serial/CacheCmd/cache/add_local 1.31
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 54.41
85 TestFunctional/serial/ComponentHealth 0.08
86 TestFunctional/serial/LogsCmd 1.08
87 TestFunctional/serial/LogsFileCmd 1.1
88 TestFunctional/serial/InvalidService 4.14
90 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DashboardCmd 29.48
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.8
98 TestFunctional/parallel/ServiceCmdConnect 9.57
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 51.77
102 TestFunctional/parallel/SSHCmd 0.46
103 TestFunctional/parallel/CpCmd 1.41
104 TestFunctional/parallel/MySQL 37.31
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.29
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.53
127 TestFunctional/parallel/DockerEnv/bash 0.83
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
132 TestFunctional/parallel/ProfileCmd/profile_list 0.33
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
134 TestFunctional/parallel/MountCmd/any-port 8.59
135 TestFunctional/parallel/ServiceCmd/List 0.33
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 4.88
141 TestFunctional/parallel/ImageCommands/Setup 1.54
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
144 TestFunctional/parallel/ServiceCmd/Format 0.32
145 TestFunctional/parallel/ServiceCmd/URL 0.41
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
149 TestFunctional/parallel/MountCmd/specific-port 2.02
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
159 TestGvisorAddon 217.5
162 TestMultiControlPlane/serial/StartCluster 224.1
163 TestMultiControlPlane/serial/DeployApp 6.62
164 TestMultiControlPlane/serial/PingHostFromPods 1.38
165 TestMultiControlPlane/serial/AddWorkerNode 57.09
166 TestMultiControlPlane/serial/NodeLabels 0.08
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 13.77
169 TestMultiControlPlane/serial/StopSecondaryNode 13.94
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 24.52
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 175.42
174 TestMultiControlPlane/serial/DeleteSecondaryNode 7.87
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 40.74
177 TestMultiControlPlane/serial/RestartCluster 120.64
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 83.46
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestImageBuild/serial/Setup 43.45
184 TestImageBuild/serial/NormalBuild 1.54
185 TestImageBuild/serial/BuildWithBuildArg 0.96
186 TestImageBuild/serial/BuildWithDockerIgnore 0.78
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
191 TestJSONOutput/start/Command 88.46
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.64
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.62
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 6.78
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.22
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 90.73
223 TestMountStart/serial/StartWithMountFirst 22.97
224 TestMountStart/serial/VerifyMountFirst 0.39
225 TestMountStart/serial/StartWithMountSecond 24.19
226 TestMountStart/serial/VerifyMountSecond 0.39
227 TestMountStart/serial/DeleteFirst 0.75
228 TestMountStart/serial/VerifyMountPostDelete 0.39
229 TestMountStart/serial/Stop 1.31
230 TestMountStart/serial/RestartStopped 21.74
231 TestMountStart/serial/VerifyMountPostStop 0.39
234 TestMultiNode/serial/FreshStart2Nodes 116.68
235 TestMultiNode/serial/DeployApp2Nodes 5.51
236 TestMultiNode/serial/PingHostFrom2Pods 0.91
237 TestMultiNode/serial/AddNode 50.77
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.62
240 TestMultiNode/serial/CopyFile 7.6
241 TestMultiNode/serial/StopNode 2.68
242 TestMultiNode/serial/StartAfterStop 39.77
243 TestMultiNode/serial/RestartKeepsNodes 179.69
244 TestMultiNode/serial/DeleteNode 2.44
245 TestMultiNode/serial/StopMultiNode 26.46
246 TestMultiNode/serial/RestartMultiNode 99.65
247 TestMultiNode/serial/ValidateNameConflict 43.51
252 TestPreload 163.48
254 TestScheduledStopUnix 116.71
255 TestSkaffold 127
258 TestRunningBinaryUpgrade 77.55
260 TestKubernetesUpgrade 203.16
269 TestStoppedBinaryUpgrade/Setup 0.63
270 TestStoppedBinaryUpgrade/Upgrade 162.75
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
274 TestNoKubernetes/serial/StartWithK8s 85.46
276 TestPause/serial/Start 107
289 TestStartStop/group/old-k8s-version/serial/FirstStart 111.89
290 TestNoKubernetes/serial/StartWithStopK8s 33.81
291 TestNoKubernetes/serial/Start 24
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
293 TestNoKubernetes/serial/ProfileList 1.87
294 TestNoKubernetes/serial/Stop 1.34
295 TestNoKubernetes/serial/StartNoArgs 21.81
296 TestPause/serial/SecondStartNoReconfiguration 70.65
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
299 TestStartStop/group/no-preload/serial/FirstStart 100.45
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
302 TestStartStop/group/old-k8s-version/serial/Stop 14.43
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
304 TestStartStop/group/old-k8s-version/serial/SecondStart 51.94
305 TestPause/serial/Pause 0.91
306 TestPause/serial/VerifyStatus 0.29
307 TestPause/serial/Unpause 0.68
308 TestPause/serial/PauseAgain 0.91
309 TestPause/serial/DeletePaused 0.89
310 TestPause/serial/VerifyDeletedResources 17.54
312 TestStartStop/group/embed-certs/serial/FirstStart 91.89
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.93
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
317 TestStartStop/group/no-preload/serial/DeployApp 11.82
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
319 TestStartStop/group/old-k8s-version/serial/Pause 3.25
321 TestStartStop/group/newest-cni/serial/FirstStart 64.75
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
323 TestStartStop/group/no-preload/serial/Stop 13.92
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
325 TestStartStop/group/no-preload/serial/SecondStart 58.31
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
327 TestStartStop/group/embed-certs/serial/DeployApp 9.37
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.7
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
333 TestStartStop/group/embed-certs/serial/Stop 12.3
334 TestStartStop/group/newest-cni/serial/Stop 13.04
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.31
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
338 TestStartStop/group/embed-certs/serial/SecondStart 64.79
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/newest-cni/serial/SecondStart 102.28
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
344 TestStartStop/group/no-preload/serial/Pause 2.96
345 TestNetworkPlugins/group/auto/Start 120.02
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
353 TestStartStop/group/embed-certs/serial/Pause 3.89
354 TestNetworkPlugins/group/kindnet/Start 75.36
355 TestNetworkPlugins/group/calico/Start 98.34
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
359 TestStartStop/group/newest-cni/serial/Pause 2.91
360 TestNetworkPlugins/group/custom-flannel/Start 96.12
361 TestNetworkPlugins/group/auto/KubeletFlags 0.56
362 TestNetworkPlugins/group/auto/NetCatPod 12.47
363 TestNetworkPlugins/group/auto/DNS 0.21
364 TestNetworkPlugins/group/auto/Localhost 0.16
365 TestNetworkPlugins/group/auto/HairPin 0.16
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
368 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
369 TestNetworkPlugins/group/false/Start 92.93
370 TestNetworkPlugins/group/kindnet/DNS 0.22
371 TestNetworkPlugins/group/kindnet/Localhost 0.18
372 TestNetworkPlugins/group/kindnet/HairPin 0.19
373 TestNetworkPlugins/group/enable-default-cni/Start 90.84
374 TestNetworkPlugins/group/calico/ControllerPod 6.01
375 TestNetworkPlugins/group/calico/KubeletFlags 0.24
376 TestNetworkPlugins/group/calico/NetCatPod 14.32
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
379 TestNetworkPlugins/group/custom-flannel/DNS 0.22
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
381 TestNetworkPlugins/group/calico/DNS 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
383 TestNetworkPlugins/group/calico/Localhost 0.2
384 TestNetworkPlugins/group/calico/HairPin 0.19
385 TestNetworkPlugins/group/flannel/Start 70.08
386 TestNetworkPlugins/group/bridge/Start 115
387 TestNetworkPlugins/group/false/KubeletFlags 0.28
388 TestNetworkPlugins/group/false/NetCatPod 13.33
389 TestNetworkPlugins/group/false/DNS 0.19
390 TestNetworkPlugins/group/false/Localhost 0.15
391 TestNetworkPlugins/group/false/HairPin 0.15
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.36
394 TestNetworkPlugins/group/kubenet/Start 86.2
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
398 TestNetworkPlugins/group/flannel/ControllerPod 6.01
399 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
400 TestNetworkPlugins/group/flannel/NetCatPod 12.45
401 TestNetworkPlugins/group/flannel/DNS 0.37
402 TestNetworkPlugins/group/flannel/Localhost 0.15
403 TestNetworkPlugins/group/flannel/HairPin 0.15
404 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
405 TestNetworkPlugins/group/bridge/NetCatPod 11.28
406 TestNetworkPlugins/group/bridge/DNS 0.16
407 TestNetworkPlugins/group/bridge/Localhost 0.14
408 TestNetworkPlugins/group/bridge/HairPin 0.14
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
410 TestNetworkPlugins/group/kubenet/NetCatPod 11.22
411 TestNetworkPlugins/group/kubenet/DNS 0.19
412 TestNetworkPlugins/group/kubenet/Localhost 0.13
413 TestNetworkPlugins/group/kubenet/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-610526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-610526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false: (7.303371954s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.30s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 11:29:13.683457    9909 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1018 11:29:13.683559    9909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-610526
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-610526: exit status 85 (67.880573ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-610526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-610526 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │          │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:06
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:06.426034    9921 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:06.426361    9921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:06.426372    9921 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:06.426377    9921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:06.426582    9921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	W1018 11:29:06.426730    9921 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-6010/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-6010/.minikube/config/config.json: no such file or directory
	I1018 11:29:06.427472    9921 out.go:368] Setting JSON to true
	I1018 11:29:06.428424    9921 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":693,"bootTime":1760786253,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:06.428525    9921 start.go:141] virtualization: kvm guest
	I1018 11:29:06.431416    9921 out.go:99] [download-only-610526] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 11:29:06.431615    9921 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 11:29:06.431677    9921 notify.go:220] Checking for updates...
	I1018 11:29:06.433415    9921 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:29:06.435121    9921 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:06.436688    9921 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 11:29:06.438265    9921 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 11:29:06.440000    9921 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 11:29:06.442567    9921 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:29:06.442786    9921 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:07.009793    9921 out.go:99] Using the kvm2 driver based on user configuration
	I1018 11:29:07.009848    9921 start.go:305] selected driver: kvm2
	I1018 11:29:07.009856    9921 start.go:925] validating driver "kvm2" against <nil>
	I1018 11:29:07.010280    9921 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:07.010468    9921 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:07.026984    9921 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:07.027020    9921 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 11:29:07.041630    9921 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 11:29:07.041679    9921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:07.042493    9921 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 11:29:07.042717    9921 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:29:07.042748    9921 cni.go:84] Creating CNI manager for ""
	I1018 11:29:07.042812    9921 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:29:07.042827    9921 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:07.042887    9921 start.go:349] cluster config:
	{Name:download-only-610526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-610526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:07.043126    9921 iso.go:125] acquiring lock: {Name:mk7b9977f44c882a06d0a932f05bd4c8e4cea871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:07.045210    9921 out.go:99] Downloading VM boot image ...
	I1018 11:29:07.045251    9921 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 11:29:09.801947    9921 out.go:99] Starting "download-only-610526" primary control-plane node in "download-only-610526" cluster
	I1018 11:29:09.801982    9921 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1018 11:29:09.824573    9921 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1018 11:29:09.824611    9921 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:09.824792    9921 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1018 11:29:09.827165    9921 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 11:29:09.827196    9921 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1018 11:29:09.851964    9921 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1018 11:29:09.852107    9921 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-610526 host does not exist
	  To start a cluster, run: "minikube start -p download-only-610526"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-610526
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.02s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-754895 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-754895 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false: (3.017186057s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.02s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 11:29:17.061459    9909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1018 11:29:17.061506    9909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
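
The check above only asserts that the cached preload tarball is already on disk; a minimal manual equivalent, assuming the MINIKUBE_HOME used by this job, is:

  ls -lh /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4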

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-754895
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-754895: exit status 85 (67.043385ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-610526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-610526 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete  │ -p download-only-610526                                                                                                                                                      │ download-only-610526 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start   │ -o=json --download-only -p download-only-754895 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-754895 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:14
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:14.087448   10122 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:14.088150   10122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:14.088163   10122 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:14.088167   10122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:14.088394   10122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 11:29:14.088883   10122 out.go:368] Setting JSON to true
	I1018 11:29:14.089733   10122 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":701,"bootTime":1760786253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:14.089847   10122 start.go:141] virtualization: kvm guest
	I1018 11:29:14.092213   10122 out.go:99] [download-only-754895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:14.092452   10122 notify.go:220] Checking for updates...
	I1018 11:29:14.094268   10122 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:29:14.095920   10122 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:14.097422   10122 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 11:29:14.099213   10122 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 11:29:14.101039   10122 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-754895 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754895"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-754895
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I1018 11:29:17.699537    9909 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-818662 --alsologtostderr --binary-mirror http://127.0.0.1:33885 --driver=kvm2  --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-818662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-818662
--- PASS: TestBinaryMirror (0.67s)

TestOffline (110.83s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-016788 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-016788 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false: (1m49.952400682s)
helpers_test.go:175: Cleaning up "offline-docker-016788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-016788
--- PASS: TestOffline (110.83s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-886198
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-886198: exit status 85 (55.415532ms)

-- stdout --
	* Profile "addons-886198" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-886198"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-886198
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-886198: exit status 85 (64.774333ms)

-- stdout --
	* Profile "addons-886198" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-886198"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (207.42s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-886198 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-886198 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m27.421398599s)
--- PASS: TestAddons/Setup (207.42s)

TestAddons/serial/Volcano (43.52s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 28.07023ms
addons_test.go:868: volcano-scheduler stabilized in 28.107851ms
addons_test.go:876: volcano-admission stabilized in 28.542893ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-kx57n" [db607a86-572f-40de-9a50-4e7faee6743f] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00485511s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-dvc27" [9cd341c1-0e44-42dc-a5d2-7850d230fa56] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004978066s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-4rdwz" [a5f9ce95-72ed-4537-99d9-c24a90cb9683] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004250175s
addons_test.go:903: (dbg) Run:  kubectl --context addons-886198 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-886198 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-886198 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [a9d15a3d-870f-4f65-bd23-fb01fd7da927] Pending
helpers_test.go:352: "test-job-nginx-0" [a9d15a3d-870f-4f65-bd23-fb01fd7da927] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [a9d15a3d-870f-4f65-bd23-fb01fd7da927] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004705584s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable volcano --alsologtostderr -v=1: (12.012732386s)
--- PASS: TestAddons/serial/Volcano (43.52s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-886198 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-886198 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (10.63s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-886198 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-886198 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dbd9ff0c-356d-4763-ac45-73a36cd8c84d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dbd9ff0c-356d-4763-ac45-73a36cd8c84d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004089447s
addons_test.go:694: (dbg) Run:  kubectl --context addons-886198 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-886198 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-886198 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.63s)

TestAddons/parallel/Registry (18.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.285356ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-xcv2w" [18d70587-bc13-43be-a4c0-e97535a60610] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004129437s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-47rq5" [35044b00-b1e0-46d0-9e10-1b2664c96323] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007444282s
addons_test.go:392: (dbg) Run:  kubectl --context addons-886198 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-886198 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-886198 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.183391628s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.26s)

TestAddons/parallel/RegistryCreds (0.73s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.138895ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-886198
addons_test.go:332: (dbg) Run:  kubectl --context addons-886198 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/Ingress (23.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-886198 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-886198 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-886198 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [034c341f-c545-4476-8779-40b5fa0056a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [034c341f-c545-4476-8779-40b5fa0056a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005749104s
I1018 11:34:21.103368    9909 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-886198 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.191
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable ingress-dns --alsologtostderr -v=1: (1.918318268s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable ingress --alsologtostderr -v=1: (7.920169443s)
--- PASS: TestAddons/parallel/Ingress (23.19s)
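
The ingress verification above reduces to a short command sequence (a sketch using this run's profile name and the suite's testdata manifests; the nslookup target is whatever "minikube ip" reports for the profile):

  kubectl --context addons-886198 replace --force -f testdata/nginx-ingress-v1.yaml
  kubectl --context addons-886198 replace --force -f testdata/nginx-pod-svc.yaml
  out/minikube-linux-amd64 -p addons-886198 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-886198 ip)"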

TestAddons/parallel/InspektorGadget (5.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cq4qf" [ece13bd0-8bfd-4f96-9350-f086e628faf7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.086501131s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.31s)

TestAddons/parallel/MetricsServer (6.22s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.95218ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-q2bx7" [75331698-4d8a-41e1-a947-1daa223882d9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01033201s
addons_test.go:463: (dbg) Run:  kubectl --context addons-886198 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable metrics-server --alsologtostderr -v=1: (1.088376478s)
--- PASS: TestAddons/parallel/MetricsServer (6.22s)

TestAddons/parallel/CSI (46.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 11:34:01.508142    9909 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 11:34:01.515400    9909 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 11:34:01.515427    9909 kapi.go:107] duration metric: took 7.302005ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.310696ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/18 11:34:06 [DEBUG] GET http://192.168.39.191:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [07a284e2-10ab-490f-8e84-d301ead6b36d] Pending
helpers_test.go:352: "task-pv-pod" [07a284e2-10ab-490f-8e84-d301ead6b36d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [07a284e2-10ab-490f-8e84-d301ead6b36d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.015905283s
addons_test.go:572: (dbg) Run:  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-886198 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-886198 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-886198 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-886198 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [dd4e7d74-6ba6-4156-8ba8-bba7b9c93c13] Pending
helpers_test.go:352: "task-pv-pod-restore" [dd4e7d74-6ba6-4156-8ba8-bba7b9c93c13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [dd4e7d74-6ba6-4156-8ba8-bba7b9c93c13] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004657754s
addons_test.go:614: (dbg) Run:  kubectl --context addons-886198 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-886198 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-886198 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.029158018s)
--- PASS: TestAddons/parallel/CSI (46.05s)
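
Condensed, the CSI flow exercised above is: claim a volume, mount it in a pod, snapshot it, then restore the snapshot into a new claim and pod (a sketch using this run's profile and the suite's testdata manifests):

  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-886198 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-886198 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml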

TestAddons/parallel/Headlamp (19.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-886198 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-886198 --alsologtostderr -v=1: (1.180487146s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-v7knk" [cd50a02f-575e-48de-acba-782750f56462] Pending
helpers_test.go:352: "headlamp-6945c6f4d-v7knk" [cd50a02f-575e-48de-acba-782750f56462] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-v7knk" [cd50a02f-575e-48de-acba-782750f56462] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.005672349s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (19.37s)

TestAddons/parallel/CloudSpanner (6.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6d4nw" [10c2a585-10af-48d7-b9c6-b5a6e3b97633] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007156051s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/LocalPath (10.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-886198 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-886198 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d8015260-82fb-4b8f-8f9e-02d4ec6b47bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d8015260-82fb-4b8f-8f9e-02d4ec6b47bc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d8015260-82fb-4b8f-8f9e-02d4ec6b47bc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004040708s
addons_test.go:967: (dbg) Run:  kubectl --context addons-886198 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 ssh "cat /opt/local-path-provisioner/pvc-ca63521e-9ce2-4ca2-ada6-1e83d03195c1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-886198 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-886198 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.05s)
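
The local-path check above boils down to binding a PVC, letting a pod write to it, and reading the file back from the node (a sketch with this run's profile; the pvc-... directory name is generated per claim, so the path below is illustrative):

  kubectl --context addons-886198 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-886198 apply -f testdata/storage-provisioner-rancher/pod.yaml
  out/minikube-linux-amd64 -p addons-886198 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"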

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-zfmfc" [2d9652de-65fa-42e7-877d-64fb516e8c13] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006976143s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (12.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2mnsh" [f2d4211f-677f-4ed9-bcb4-4b66574e3e63] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006215301s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-886198 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-886198 addons disable yakd --alsologtostderr -v=1: (6.237973267s)
--- PASS: TestAddons/parallel/Yakd (12.25s)

TestAddons/StoppedEnableDisable (13.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-886198
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-886198: (13.459648373s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-886198
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-886198
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-886198
--- PASS: TestAddons/StoppedEnableDisable (13.75s)

TestCertOptions (81.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-770314 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-770314 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --auto-update-drivers=false: (1m19.812947462s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-770314 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-770314 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-770314 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-770314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-770314
--- PASS: TestCertOptions (81.38s)

TestCertExpiration (310.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-550750 --memory=3072 --cert-expiration=3m --driver=kvm2  --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-550750 --memory=3072 --cert-expiration=3m --driver=kvm2  --auto-update-drivers=false: (1m15.392021846s)
E1018 12:19:49.463526    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.469960    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.481478    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.503116    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.544709    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.626320    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:49.787909    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:50.109800    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:50.751935    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:52.033468    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:54.595736    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:59.718125    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-550750 --memory=3072 --cert-expiration=8760h --driver=kvm2  --auto-update-drivers=false
E1018 12:22:45.860839    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-550750 --memory=3072 --cert-expiration=8760h --driver=kvm2  --auto-update-drivers=false: (53.936735813s)
helpers_test.go:175: Cleaning up "cert-expiration-550750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-550750
--- PASS: TestCertExpiration (310.24s)

TestDockerFlags (85.3s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-918844 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-918844 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m23.702304004s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-918844 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-918844 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-918844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-918844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-918844: (1.017669542s)
--- PASS: TestDockerFlags (85.30s)

TestForceSystemdFlag (79.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-975288 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-975288 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m18.043391305s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-975288 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-975288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-975288
--- PASS: TestForceSystemdFlag (79.19s)

TestForceSystemdEnv (72.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-934669 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-934669 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m11.351171634s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-934669 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-934669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-934669
--- PASS: TestForceSystemdEnv (72.52s)

TestKVMDriverInstallOrUpdate (0.84s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1018 12:18:10.014331    9909 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:18:10.014493    9909 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1141736593/001:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:18:10.044532    9909 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1141736593/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 12:18:10.044574    9909 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 12:18:10.044703    9909 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 12:18:10.044742    9909 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1141736593/001/docker-machine-driver-kvm2
I1018 12:18:10.713391    9909 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1141736593/001:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:18:10.730114    9909 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1141736593/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.84s)
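For reference, a minimal sketch (not minikube's install.go, and with an illustrative driver path) of the check logged above: ask the installed docker-machine-driver-kvm2 for its version and decide whether a fresh binary would need to be downloaded.

package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// installedDriverVersion runs "<driver> version" and pulls the first x.y.z
// token out of whatever the binary prints.
func installedDriverVersion(path string) (string, error) {
	out, err := exec.Command(path, "version").CombinedOutput()
	if err != nil {
		return "", err
	}
	v := regexp.MustCompile(`\d+\.\d+\.\d+`).FindString(string(out))
	if v == "" {
		return "", fmt.Errorf("no version in output %q", out)
	}
	return v, nil
}

func main() {
	const want = "1.37.0"
	// Path is illustrative; the test above validates a binary in a per-test temp directory.
	got, err := installedDriverVersion("/usr/local/bin/docker-machine-driver-kvm2")
	if err != nil || got != want {
		fmt.Printf("driver reports %q, want %q: would download a fresh binary\n", got, want)
		return
	}
	fmt.Println("driver is already up to date:", got)
}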

                                                
                                    
TestErrorSpam/setup (41.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-661083 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-661083 --driver=kvm2  --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-661083 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-661083 --driver=kvm2  --auto-update-drivers=false: (41.069520086s)
--- PASS: TestErrorSpam/setup (41.07s)

                                                
                                    
TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 pause
--- PASS: TestErrorSpam/pause (1.40s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (5.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop: (2.341026454s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop: (1.834197057s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-661083 --log_dir /tmp/nospam-661083 stop: (1.16481309s)
--- PASS: TestErrorSpam/stop (5.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/test/nested/copy/9909/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-897621 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --auto-update-drivers=false: (1m28.170759228s)
--- PASS: TestFunctional/serial/StartWithProxy (88.17s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (67.67s)

=== RUN   TestFunctional/serial/SoftStart
I1018 11:37:21.515885    9909 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --alsologtostderr -v=8
E1018 11:37:45.861214    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:45.875379    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:45.886868    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:45.908364    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:45.949928    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:46.031387    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:46.193022    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:46.514752    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:47.156899    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:48.438606    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:51.001533    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:56.123789    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:06.366147    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:38:26.847773    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-897621 --alsologtostderr -v=8: (1m7.673042641s)
functional_test.go:678: soft start took 1m7.673641556s for "functional-897621" cluster.
I1018 11:38:29.189376    9909 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (67.67s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-897621 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-897621 /tmp/TestFunctionalserialCacheCmdcacheadd_local3993547106/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache add minikube-local-cache-test:functional-897621
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache delete minikube-local-cache-test:functional-897621
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-897621
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.427534ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
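A rough host-side sketch of the cache-reload round trip exercised above, driven with os/exec: remove the image inside the VM, confirm it is gone, reload the cache, and confirm it is back. Binary path and profile name are taken from this log; error handling is reduced to panics for brevity.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	const profile = "functional-897621"
	// Remove the image inside the VM, then prove it is absent.
	run("-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if _, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		panic("image should be absent before the reload")
	}
	// Reload everything in the local cache back into the VM and re-check.
	if _, err := run("-p", profile, "cache", "reload"); err != nil {
		panic(err)
	}
	if out, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		panic(fmt.Sprintf("image still missing after reload: %s", out))
	}
	fmt.Println("cache reload restored registry.k8s.io/pause:latest")
}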

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 kubectl -- --context functional-897621 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-897621 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (54.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 11:39:07.810489    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-897621 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.412334277s)
functional_test.go:776: restart took 54.412449721s for "functional-897621" cluster.
I1018 11:39:29.244315    9909 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (54.41s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-897621 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
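A small sketch of the health check above, assuming only kubectl and the standard Pod API fields: list the control-plane pods as JSON and report each pod's phase and Ready condition. The kubectl context name is taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-897621",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}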

                                                
                                    
TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-897621 logs: (1.079084174s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 logs --file /tmp/TestFunctionalserialLogsFileCmd431371890/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-897621 logs --file /tmp/TestFunctionalserialLogsFileCmd431371890/001/logs.txt: (1.094104164s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

                                                
                                    
TestFunctional/serial/InvalidService (4.14s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-897621 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-897621
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-897621: exit status 115 (295.629148ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.228:32188 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-897621 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 config get cpus: exit status 14 (66.208246ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 config get cpus: exit status 14 (54.008737ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
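A tiny sketch of the unset/get/set cycle above, capturing the non-zero exit code that "config get" returns for a missing key (14 in this run). Binary path and profile come from the log; this is an illustration, not the test's helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configCmd runs "minikube -p <profile> config <args...>" and returns the
// exit code together with the combined output.
func configCmd(args ...string) (int, string) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-897621", "config"}, args...)...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), string(out)
	}
	return 0, string(out)
}

func main() {
	configCmd("unset", "cpus")
	if code, _ := configCmd("get", "cpus"); code == 0 {
		panic("expected a non-zero exit for an unset key")
	}
	configCmd("set", "cpus", "2")
	code, out := configCmd("get", "cpus")
	fmt.Printf("cpus=%s(exit %d)\n", out, code)
	configCmd("unset", "cpus")
}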

                                                
                                    
TestFunctional/parallel/DashboardCmd (29.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897621 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897621 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 18684: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.48s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897621 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false: exit status 23 (147.863099ms)

                                                
                                                
-- stdout --
	* [functional-897621] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:39:47.773564   17551 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:39:47.774354   17551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:39:47.774374   17551 out.go:374] Setting ErrFile to fd 2...
	I1018 11:39:47.774382   17551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:39:47.774840   17551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 11:39:47.775752   17551 out.go:368] Setting JSON to false
	I1018 11:39:47.776683   17551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1335,"bootTime":1760786253,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:39:47.776777   17551 start.go:141] virtualization: kvm guest
	I1018 11:39:47.778992   17551 out.go:179] * [functional-897621] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:39:47.780490   17551 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:39:47.780489   17551 notify.go:220] Checking for updates...
	I1018 11:39:47.783448   17551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:39:47.785008   17551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 11:39:47.788540   17551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 11:39:47.790026   17551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:39:47.791323   17551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:39:47.792962   17551 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 11:39:47.793394   17551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:39:47.793470   17551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:39:47.808075   17551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I1018 11:39:47.808549   17551 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:39:47.809095   17551 main.go:141] libmachine: Using API Version  1
	I1018 11:39:47.809117   17551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:39:47.809489   17551 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:39:47.809703   17551 main.go:141] libmachine: (functional-897621) Calling .DriverName
	I1018 11:39:47.809975   17551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:39:47.810550   17551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:39:47.810616   17551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:39:47.824740   17551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I1018 11:39:47.825347   17551 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:39:47.825886   17551 main.go:141] libmachine: Using API Version  1
	I1018 11:39:47.825914   17551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:39:47.826278   17551 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:39:47.826477   17551 main.go:141] libmachine: (functional-897621) Calling .DriverName
	I1018 11:39:47.860495   17551 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 11:39:47.862058   17551 start.go:305] selected driver: kvm2
	I1018 11:39:47.862077   17551 start.go:925] validating driver "kvm2" against &{Name:functional-897621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-897621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:39:47.862169   17551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:39:47.864389   17551 out.go:203] 
	W1018 11:39:47.865894   17551 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 11:39:47.867153   17551 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --dry-run --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
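A sketch of the assertion above, under the assumption that the interesting signal is the exit code: a 250MB memory request should be rejected during the dry run with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), before any VM work happens. Binary path and profile come from this log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-897621",
		"--dry-run", "--memory", "250MB", "--driver=kvm2")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY failure")
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}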

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897621 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897621 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false: exit status 23 (153.08154ms)

                                                
                                                
-- stdout --
	* [functional-897621] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:39:48.072254   17619 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:39:48.072423   17619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:39:48.072433   17619 out.go:374] Setting ErrFile to fd 2...
	I1018 11:39:48.072436   17619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:39:48.072753   17619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 11:39:48.073227   17619 out.go:368] Setting JSON to false
	I1018 11:39:48.074203   17619 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1335,"bootTime":1760786253,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:39:48.074344   17619 start.go:141] virtualization: kvm guest
	I1018 11:39:48.076339   17619 out.go:179] * [functional-897621] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 11:39:48.078244   17619 notify.go:220] Checking for updates...
	I1018 11:39:48.078279   17619 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:39:48.080027   17619 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:39:48.081678   17619 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	I1018 11:39:48.086549   17619 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	I1018 11:39:48.088259   17619 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:39:48.089852   17619 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:39:48.091757   17619 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 11:39:48.092185   17619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:39:48.092272   17619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:39:48.107703   17619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1018 11:39:48.108224   17619 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:39:48.108787   17619 main.go:141] libmachine: Using API Version  1
	I1018 11:39:48.108810   17619 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:39:48.109345   17619 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:39:48.109572   17619 main.go:141] libmachine: (functional-897621) Calling .DriverName
	I1018 11:39:48.109883   17619 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:39:48.110371   17619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:39:48.110434   17619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:39:48.124683   17619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I1018 11:39:48.125158   17619 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:39:48.125756   17619 main.go:141] libmachine: Using API Version  1
	I1018 11:39:48.125782   17619 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:39:48.126151   17619 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:39:48.126383   17619 main.go:141] libmachine: (functional-897621) Calling .DriverName
	I1018 11:39:48.159183   17619 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1018 11:39:48.160626   17619 start.go:305] selected driver: kvm2
	I1018 11:39:48.160651   17619 start.go:925] validating driver "kvm2" against &{Name:functional-897621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-897621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:39:48.160811   17619 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:39:48.163326   17619 out.go:203] 
	W1018 11:39:48.164627   17619 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 11:39:48.166068   17619 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.8s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-897621 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-897621 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxfkr" [39c5a8b0-ba1b-4190-88b9-25642ec9ccb7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxfkr" [39c5a8b0-ba1b-4190-88b9-25642ec9ccb7] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006674599s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.228:31079
functional_test.go:1680: http://192.168.39.228:31079: success! body:
Request served by hello-node-connect-7d85dfc575-pxfkr

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.228:31079
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.57s)
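A sketch of the connectivity check above: ask minikube for the NodePort URL of hello-node-connect, then issue a plain GET against it. Service and profile names come from this log; the deployment and expose steps are assumed to have run already.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-897621",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}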

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5ac0bd3a-ca35-49ae-a509-cdc69fb58101] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003459089s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-897621 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-897621 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-897621 get pvc myclaim -o=json
I1018 11:39:42.359398    9909 retry.go:31] will retry after 1.300778304s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b1ad8a21-55dd-45f7-89a1-b72dce581c2d ResourceVersion:801 Generation:0 CreationTimestamp:2025-10-18 11:39:42 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001712b90 VolumeMode:0xc001712ba0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-897621 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897621 apply -f testdata/storage-provisioner/pod.yaml
I1018 11:39:43.861655    9909 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d39176ed-a42b-4657-9549-243ab8a4ff8a] Pending
helpers_test.go:352: "sp-pod" [d39176ed-a42b-4657-9549-243ab8a4ff8a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d39176ed-a42b-4657-9549-243ab8a4ff8a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003881442s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-897621 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-897621 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-897621 delete -f testdata/storage-provisioner/pod.yaml: (1.542824126s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897621 apply -f testdata/storage-provisioner/pod.yaml
I1018 11:40:02.701892    9909 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [82deadcc-df1c-4a53-908d-b1865b915350] Pending
helpers_test.go:352: "sp-pod" [82deadcc-df1c-4a53-908d-b1865b915350] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [82deadcc-df1c-4a53-908d-b1865b915350] Running
2025/10/18 11:40:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004844474s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-897621 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.77s)
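A sketch of the "wait until the PVC is Bound" step that produces the retry message above, polling kubectl's jsonpath output. Claim name and context come from the log; the 30-second ceiling and fixed 2-second poll interval are arbitrary choices for the sketch, not minikube's retry.go behaviour.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pvcPhase returns the current phase of the "myclaim" PVC, or "" on error.
func pvcPhase() string {
	out, err := exec.Command("kubectl", "--context", "functional-897621",
		"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
	if err != nil {
		return ""
	}
	return string(out)
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if pvcPhase() == "Bound" {
			fmt.Println("PVC myclaim is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("PVC never reached Bound")
}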

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh -n functional-897621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cp functional-897621:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170877301/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh -n functional-897621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh -n functional-897621 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

                                                
                                    
TestFunctional/parallel/MySQL (37.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-897621 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-mrqh4" [13a7bf5b-4ff2-43e3-9026-6434e22a20d9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-mrqh4" [13a7bf5b-4ff2-43e3-9026-6434e22a20d9] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.007109824s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;": exit status 1 (256.0864ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:40:18.734219    9909 retry.go:31] will retry after 1.233226622s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;": exit status 1 (419.305375ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:40:20.387190    9909 retry.go:31] will retry after 2.157015352s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;": exit status 1 (159.311081ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:40:22.704605    9909 retry.go:31] will retry after 2.652451254s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897621 exec mysql-5bb876957f-mrqh4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.31s)
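
Note that the test passes despite the three failed exec attempts: ERROR 1045 and ERROR 2002 are expected while mysqld is still initialising, and the harness retries with a growing delay (the retry.go lines above). A minimal Go sketch of that retry loop, reusing the context and pod name from the log; this is an illustration of the pattern, not the test's implementation.

// mysql_retry_sketch.go: retry "kubectl exec ... mysql" with growing backoff
// until the server accepts the query, mirroring the retry behaviour in the log.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Context and pod name are copied from the log above; kubectl on PATH is assumed.
	args := []string{"--context", "functional-897621", "exec", "mysql-5bb876957f-mrqh4",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("databases:\n%s", out)
			return
		}
		// Access-denied and socket errors are normal while mysqld is starting up.
		log.Printf("attempt %d failed (%v), retrying after %v", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	log.Fatal("mysql never became reachable")
}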

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9909/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /etc/test/nested/copy/9909/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9909.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /etc/ssl/certs/9909.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9909.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /usr/share/ca-certificates/9909.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/99092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /etc/ssl/certs/99092.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/99092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /usr/share/ca-certificates/99092.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
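
The cert-sync check simply confirms that each synced certificate is readable inside the guest, both under its original name and under the hash-named entry in /etc/ssl/certs. A minimal Go sketch of that check, assuming a minikube binary on PATH and reusing the paths shown in the log:

// certsync_sketch.go: confirm a synced certificate is readable at each
// path the test checks, using "minikube ssh sudo cat" as above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const profile = "functional-897621" // assumption: profile name taken from the log
	paths := []string{
		"/etc/ssl/certs/9909.pem",
		"/usr/share/ca-certificates/9909.pem",
		"/etc/ssl/certs/51391683.0", // hash-named entry checked by the test
	}
	for _, p := range paths {
		if out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo cat "+p).CombinedOutput(); err != nil {
			log.Fatalf("%s not readable: %v\n%s", p, err, out)
		}
	}
	log.Println("all certificate paths readable")
}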

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-897621 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh "sudo systemctl is-active crio": exit status 1 (213.650042ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
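
The non-zero exit here is the expected outcome: with docker as the configured runtime, "systemctl is-active crio" prints "inactive" and exits with status 3, and the test treats that as success. A minimal Go sketch of the same check, assuming a minikube binary on PATH and the profile name from the log:

// runtime_inactive_sketch.go: "systemctl is-active" exits non-zero and prints
// "inactive" for a stopped unit, which is what the test expects for crio here.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-897621" // assumption: profile name taken from the log
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active crio").Output() // stdout only; a non-zero exit is expected
	state := strings.TrimSpace(string(out))
	switch {
	case err == nil && state == "active":
		log.Fatal("crio is active, but docker is the configured runtime here")
	case state == "inactive":
		log.Println("crio is inactive, as the test expects")
	default:
		log.Fatalf("unexpected state %q (err: %v)", state, err)
	}
}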

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-897621 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-897621 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vz6kr" [93cdf372-0b74-4ed5-86a0-b12d12c7f462] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-vz6kr" [93cdf372-0b74-4ed5-86a0-b12d12c7f462] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.006484594s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)
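
The "waiting ... for pods matching" lines above come from a polling helper that watches the labelled pods until they report Running. A simplified Go sketch of that wait, assuming kubectl on PATH and the context and label from the log; the real helper inspects full pod status, not just the phase.

// deployapp_wait_sketch.go: poll until every pod with app=hello-node reports
// phase Running, a simplified version of the wait performed above.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-897621",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && allRunning(phases) {
			log.Println("hello-node is ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=hello-node")
}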

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-897621 docker-env) && out/minikube-linux-amd64 status -p functional-897621"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-897621 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.83s)
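
The bash test works because "minikube docker-env" emits shell export lines that point the docker CLI at the daemon inside the guest; eval-ing them makes the subsequent "docker images" run against that daemon. A minimal Go sketch of the same idea without a shell, assuming minikube and docker on PATH and the profile name from the log:

// dockerenv_sketch.go: apply "minikube docker-env" to this process so that
// the docker CLI targets the daemon inside the guest, like the bash eval above.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-897621" // assumption: profile name taken from the log
	out, err := exec.Command("minikube", "-p", profile, "docker-env", "--shell", "bash").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "export ") {
			continue // skip comment lines in the docker-env output
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], `"`))
		}
	}
	images, err := exec.Command("docker", "images").CombinedOutput()
	if err != nil {
		log.Fatalf("docker images failed: %v\n%s", err, images)
	}
	os.Stdout.Write(images)
}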

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "281.366479ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.047315ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "280.925843ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "47.646674ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
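
The "Took ..." lines above compare the cost of a full "profile list -o json" against the --light variant, which skips the per-profile status probes. A small Go sketch of that timing comparison, assuming a minikube binary on PATH; the durations it prints are machine-dependent and only illustrative.

// profiletiming_sketch.go: time "profile list -o json" with and without
// --light, the comparison the test above records.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func timed(args ...string) time.Duration {
	start := time.Now()
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return time.Since(start)
}

func main() {
	full := timed("profile", "list", "-o", "json")
	light := timed("profile", "list", "-o", "json", "--light")
	fmt.Printf("full: %v, light: %v\n", full, light)
}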

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdany-port3367422423/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760787581531252761" to /tmp/TestFunctionalparallelMountCmdany-port3367422423/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760787581531252761" to /tmp/TestFunctionalparallelMountCmdany-port3367422423/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760787581531252761" to /tmp/TestFunctionalparallelMountCmdany-port3367422423/001/test-1760787581531252761
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.433666ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 11:39:41.720998    9909 retry.go:31] will retry after 464.324858ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 11:39 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 11:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 11:39 test-1760787581531252761
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh cat /mount-9p/test-1760787581531252761
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-897621 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [93cd1b38-048d-40bd-b811-72a8564d005e] Pending
helpers_test.go:352: "busybox-mount" [93cd1b38-048d-40bd-b811-72a8564d005e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [93cd1b38-048d-40bd-b811-72a8564d005e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [93cd1b38-048d-40bd-b811-72a8564d005e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003916396s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-897621 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdany-port3367422423/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)
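
The mount test runs "minikube mount" as a background daemon and then polls findmnt in the guest until the 9p mount appears (the single retry above). A rough Go sketch of that sequence; the host directory /tmp/mount-src is hypothetical, and minikube on PATH plus the profile name from the log are assumptions. The real test also exercises the mount from a busybox pod before unmounting.

// mountcmd_sketch.go: start "minikube mount" in the background and poll
// findmnt until the 9p mount shows up, as the retry in the log does.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-897621" // assumption: profile name taken from the log
	// /tmp/mount-src is a hypothetical host directory for this sketch.
	mount := exec.Command("minikube", "mount", "-p", profile, "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // the harness instead unmounts and stops the daemon cleanly

	for attempt := 0; attempt < 10; attempt++ {
		if err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run(); err == nil {
			log.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("mount never appeared")
}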

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897621 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-897621
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-897621
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897621 image ls --format short --alsologtostderr:
I1018 11:39:54.427775   18583 out.go:360] Setting OutFile to fd 1 ...
I1018 11:39:54.427924   18583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:54.427931   18583 out.go:374] Setting ErrFile to fd 2...
I1018 11:39:54.427936   18583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:54.428151   18583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 11:39:54.428870   18583 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:54.428987   18583 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:54.429392   18583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:54.429455   18583 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:54.443119   18583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46179
I1018 11:39:54.443662   18583 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:54.444239   18583 main.go:141] libmachine: Using API Version  1
I1018 11:39:54.444273   18583 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:54.444614   18583 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:54.444800   18583 main.go:141] libmachine: (functional-897621) Calling .GetState
I1018 11:39:54.447185   18583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:54.447230   18583 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:54.460987   18583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
I1018 11:39:54.461448   18583 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:54.461924   18583 main.go:141] libmachine: Using API Version  1
I1018 11:39:54.461954   18583 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:54.462360   18583 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:54.462612   18583 main.go:141] libmachine: (functional-897621) Calling .DriverName
I1018 11:39:54.462862   18583 ssh_runner.go:195] Run: systemctl --version
I1018 11:39:54.462892   18583 main.go:141] libmachine: (functional-897621) Calling .GetSSHHostname
I1018 11:39:54.466213   18583 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:54.466675   18583 main.go:141] libmachine: (functional-897621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:f0:20", ip: ""} in network mk-functional-897621: {Iface:virbr1 ExpiryTime:2025-10-18 12:36:08 +0000 UTC Type:0 Mac:52:54:00:80:f0:20 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:functional-897621 Clientid:01:52:54:00:80:f0:20}
I1018 11:39:54.466702   18583 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined IP address 192.168.39.228 and MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:54.466892   18583 main.go:141] libmachine: (functional-897621) Calling .GetSSHPort
I1018 11:39:54.467124   18583 main.go:141] libmachine: (functional-897621) Calling .GetSSHKeyPath
I1018 11:39:54.467320   18583 main.go:141] libmachine: (functional-897621) Calling .GetSSHUsername
I1018 11:39:54.467473   18583 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/functional-897621/id_rsa Username:docker}
I1018 11:39:54.572024   18583 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1018 11:39:54.625267   18583 main.go:141] libmachine: Making call to close driver server
I1018 11:39:54.625280   18583 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:54.625574   18583 main.go:141] libmachine: (functional-897621) DBG | Closing plugin on server side
I1018 11:39:54.625596   18583 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:54.625627   18583 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:39:54.625646   18583 main.go:141] libmachine: Making call to close driver server
I1018 11:39:54.625658   18583 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:54.625970   18583 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:54.626010   18583 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:39:54.626011   18583 main.go:141] libmachine: (functional-897621) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897621 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-897621 │ 9d6fbaa0e3734 │ 30B    │
│ docker.io/library/nginx                     │ latest            │ 07ccdb7838758 │ 160MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ docker.io/kicbase/echo-server               │ functional-897621 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-897621 │ ae8196c40753f │ 1.24MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897621 image ls --format table --alsologtostderr:
I1018 11:40:00.058842   18757 out.go:360] Setting OutFile to fd 1 ...
I1018 11:40:00.059134   18757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:40:00.059145   18757 out.go:374] Setting ErrFile to fd 2...
I1018 11:40:00.059149   18757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:40:00.059389   18757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 11:40:00.060004   18757 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:40:00.060125   18757 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:40:00.060540   18757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:40:00.060614   18757 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:40:00.074340   18757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
I1018 11:40:00.074943   18757 main.go:141] libmachine: () Calling .GetVersion
I1018 11:40:00.075575   18757 main.go:141] libmachine: Using API Version  1
I1018 11:40:00.075592   18757 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:40:00.075931   18757 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:40:00.076153   18757 main.go:141] libmachine: (functional-897621) Calling .GetState
I1018 11:40:00.078692   18757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:40:00.078748   18757 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:40:00.092362   18757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
I1018 11:40:00.093031   18757 main.go:141] libmachine: () Calling .GetVersion
I1018 11:40:00.093568   18757 main.go:141] libmachine: Using API Version  1
I1018 11:40:00.093600   18757 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:40:00.093962   18757 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:40:00.094222   18757 main.go:141] libmachine: (functional-897621) Calling .DriverName
I1018 11:40:00.094522   18757 ssh_runner.go:195] Run: systemctl --version
I1018 11:40:00.094551   18757 main.go:141] libmachine: (functional-897621) Calling .GetSSHHostname
I1018 11:40:00.098035   18757 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:40:00.098626   18757 main.go:141] libmachine: (functional-897621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:f0:20", ip: ""} in network mk-functional-897621: {Iface:virbr1 ExpiryTime:2025-10-18 12:36:08 +0000 UTC Type:0 Mac:52:54:00:80:f0:20 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:functional-897621 Clientid:01:52:54:00:80:f0:20}
I1018 11:40:00.098659   18757 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined IP address 192.168.39.228 and MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:40:00.098860   18757 main.go:141] libmachine: (functional-897621) Calling .GetSSHPort
I1018 11:40:00.099114   18757 main.go:141] libmachine: (functional-897621) Calling .GetSSHKeyPath
I1018 11:40:00.099314   18757 main.go:141] libmachine: (functional-897621) Calling .GetSSHUsername
I1018 11:40:00.099500   18757 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/functional-897621/id_rsa Username:docker}
I1018 11:40:00.195162   18757 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1018 11:40:00.221605   18757 main.go:141] libmachine: Making call to close driver server
I1018 11:40:00.221630   18757 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:40:00.222012   18757 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:40:00.222033   18757 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:40:00.222043   18757 main.go:141] libmachine: Making call to close driver server
I1018 11:40:00.222049   18757 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:40:00.222338   18757 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:40:00.222356   18757 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:40:00.222372   18757 main.go:141] libmachine: (functional-897621) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897621 image ls --format json --alsologtostderr:
[{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ae8196c40753f1edb14fcf18acc922f61ff510a5c3c4651404e4f7763c27de35","repoDigests":[],"repoTags":["localhost/my-image:functional-897621"],"size":"1240000"},{"id":"9d6fbaa0e373406b8ac16bec44772251760cc95fbdf8ea399d627941935d5ade","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-897621"],"size":"30"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"07ccdb7838758e758a4d52a9761636c385125a3
27355c0c94a6acff9babff938","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"160000000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["re
gistry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-897621","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897621 image ls --format json --alsologtostderr:
I1018 11:39:59.837983   18733 out.go:360] Setting OutFile to fd 1 ...
I1018 11:39:59.838255   18733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:59.838272   18733 out.go:374] Setting ErrFile to fd 2...
I1018 11:39:59.838276   18733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:59.838513   18733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 11:39:59.839086   18733 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:59.839187   18733 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:59.839562   18733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:59.839622   18733 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:59.853207   18733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
I1018 11:39:59.853927   18733 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:59.854639   18733 main.go:141] libmachine: Using API Version  1
I1018 11:39:59.854678   18733 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:59.855065   18733 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:59.855246   18733 main.go:141] libmachine: (functional-897621) Calling .GetState
I1018 11:39:59.857658   18733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:59.857700   18733 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:59.871954   18733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
I1018 11:39:59.872501   18733 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:59.873137   18733 main.go:141] libmachine: Using API Version  1
I1018 11:39:59.873168   18733 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:59.873615   18733 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:59.873821   18733 main.go:141] libmachine: (functional-897621) Calling .DriverName
I1018 11:39:59.874070   18733 ssh_runner.go:195] Run: systemctl --version
I1018 11:39:59.874095   18733 main.go:141] libmachine: (functional-897621) Calling .GetSSHHostname
I1018 11:39:59.877700   18733 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:59.878131   18733 main.go:141] libmachine: (functional-897621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:f0:20", ip: ""} in network mk-functional-897621: {Iface:virbr1 ExpiryTime:2025-10-18 12:36:08 +0000 UTC Type:0 Mac:52:54:00:80:f0:20 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:functional-897621 Clientid:01:52:54:00:80:f0:20}
I1018 11:39:59.878160   18733 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined IP address 192.168.39.228 and MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:59.878429   18733 main.go:141] libmachine: (functional-897621) Calling .GetSSHPort
I1018 11:39:59.878627   18733 main.go:141] libmachine: (functional-897621) Calling .GetSSHKeyPath
I1018 11:39:59.878870   18733 main.go:141] libmachine: (functional-897621) Calling .GetSSHUsername
I1018 11:39:59.879107   18733 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/functional-897621/id_rsa Username:docker}
I1018 11:39:59.973087   18733 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1018 11:39:59.999651   18733 main.go:141] libmachine: Making call to close driver server
I1018 11:39:59.999667   18733 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:59.999958   18733 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:59.999982   18733 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:39:59.999992   18733 main.go:141] libmachine: Making call to close driver server
I1018 11:39:59.999995   18733 main.go:141] libmachine: (functional-897621) DBG | Closing plugin on server side
I1018 11:40:00.000000   18733 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:40:00.000414   18733 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:40:00.000434   18733 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
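
The JSON output above is an array of image records with id, repoDigests, repoTags, and size fields, which makes it easy to consume programmatically. A minimal Go sketch that decodes it using exactly those field names, assuming a minikube binary on PATH and the profile name from the log:

// imagelist_sketch.go: decode "image ls --format json" using the field
// names visible in the stdout above (id, repoDigests, repoTags, size).
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	const profile = "functional-897621" // assumption: profile name taken from the log
	out, err := exec.Command("minikube", "-p", profile, "image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}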

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897621 image ls --format yaml --alsologtostderr:
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "160000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-897621
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 9d6fbaa0e373406b8ac16bec44772251760cc95fbdf8ea399d627941935d5ade
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-897621
size: "30"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897621 image ls --format yaml --alsologtostderr:
I1018 11:39:54.681060   18607 out.go:360] Setting OutFile to fd 1 ...
I1018 11:39:54.681316   18607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:54.681329   18607 out.go:374] Setting ErrFile to fd 2...
I1018 11:39:54.681335   18607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:54.681544   18607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 11:39:54.682123   18607 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:54.682216   18607 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:54.682607   18607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:54.682663   18607 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:54.696539   18607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
I1018 11:39:54.697111   18607 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:54.697781   18607 main.go:141] libmachine: Using API Version  1
I1018 11:39:54.697815   18607 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:54.698189   18607 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:54.698390   18607 main.go:141] libmachine: (functional-897621) Calling .GetState
I1018 11:39:54.700570   18607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:54.700609   18607 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:54.714387   18607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
I1018 11:39:54.714853   18607 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:54.715434   18607 main.go:141] libmachine: Using API Version  1
I1018 11:39:54.715466   18607 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:54.715818   18607 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:54.716023   18607 main.go:141] libmachine: (functional-897621) Calling .DriverName
I1018 11:39:54.716231   18607 ssh_runner.go:195] Run: systemctl --version
I1018 11:39:54.716260   18607 main.go:141] libmachine: (functional-897621) Calling .GetSSHHostname
I1018 11:39:54.720073   18607 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:54.720739   18607 main.go:141] libmachine: (functional-897621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:f0:20", ip: ""} in network mk-functional-897621: {Iface:virbr1 ExpiryTime:2025-10-18 12:36:08 +0000 UTC Type:0 Mac:52:54:00:80:f0:20 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:functional-897621 Clientid:01:52:54:00:80:f0:20}
I1018 11:39:54.720760   18607 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined IP address 192.168.39.228 and MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:54.720991   18607 main.go:141] libmachine: (functional-897621) Calling .GetSSHPort
I1018 11:39:54.721211   18607 main.go:141] libmachine: (functional-897621) Calling .GetSSHKeyPath
I1018 11:39:54.721495   18607 main.go:141] libmachine: (functional-897621) Calling .GetSSHUsername
I1018 11:39:54.721683   18607 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/functional-897621/id_rsa Username:docker}
I1018 11:39:54.844220   18607 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1018 11:39:54.905436   18607 main.go:141] libmachine: Making call to close driver server
I1018 11:39:54.905453   18607 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:54.905741   18607 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:54.905765   18607 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:39:54.905778   18607 main.go:141] libmachine: Making call to close driver server
I1018 11:39:54.905787   18607 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:54.906116   18607 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:54.906134   18607 main.go:141] libmachine: (functional-897621) DBG | Closing plugin on server side
I1018 11:39:54.906161   18607 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh pgrep buildkitd: exit status 1 (271.107209ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image build -t localhost/my-image:functional-897621 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-897621 image build -t localhost/my-image:functional-897621 testdata/build --alsologtostderr: (4.356932906s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897621 image build -t localhost/my-image:functional-897621 testdata/build --alsologtostderr:
I1018 11:39:55.232028   18660 out.go:360] Setting OutFile to fd 1 ...
I1018 11:39:55.232335   18660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:55.232351   18660 out.go:374] Setting ErrFile to fd 2...
I1018 11:39:55.232356   18660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:39:55.232560   18660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 11:39:55.233223   18660 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:55.233850   18660 config.go:182] Loaded profile config "functional-897621": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 11:39:55.234258   18660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:55.234325   18660 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:55.248440   18660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
I1018 11:39:55.248907   18660 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:55.249606   18660 main.go:141] libmachine: Using API Version  1
I1018 11:39:55.249627   18660 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:55.250035   18660 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:55.250278   18660 main.go:141] libmachine: (functional-897621) Calling .GetState
I1018 11:39:55.252483   18660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 11:39:55.252524   18660 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 11:39:55.267160   18660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
I1018 11:39:55.267646   18660 main.go:141] libmachine: () Calling .GetVersion
I1018 11:39:55.268353   18660 main.go:141] libmachine: Using API Version  1
I1018 11:39:55.268391   18660 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 11:39:55.268846   18660 main.go:141] libmachine: () Calling .GetMachineName
I1018 11:39:55.269059   18660 main.go:141] libmachine: (functional-897621) Calling .DriverName
I1018 11:39:55.269358   18660 ssh_runner.go:195] Run: systemctl --version
I1018 11:39:55.269386   18660 main.go:141] libmachine: (functional-897621) Calling .GetSSHHostname
I1018 11:39:55.273201   18660 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:55.273725   18660 main.go:141] libmachine: (functional-897621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:f0:20", ip: ""} in network mk-functional-897621: {Iface:virbr1 ExpiryTime:2025-10-18 12:36:08 +0000 UTC Type:0 Mac:52:54:00:80:f0:20 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:functional-897621 Clientid:01:52:54:00:80:f0:20}
I1018 11:39:55.273754   18660 main.go:141] libmachine: (functional-897621) DBG | domain functional-897621 has defined IP address 192.168.39.228 and MAC address 52:54:00:80:f0:20 in network mk-functional-897621
I1018 11:39:55.273944   18660 main.go:141] libmachine: (functional-897621) Calling .GetSSHPort
I1018 11:39:55.274182   18660 main.go:141] libmachine: (functional-897621) Calling .GetSSHKeyPath
I1018 11:39:55.274389   18660 main.go:141] libmachine: (functional-897621) Calling .GetSSHUsername
I1018 11:39:55.274619   18660 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/functional-897621/id_rsa Username:docker}
I1018 11:39:55.392519   18660 build_images.go:161] Building image from path: /tmp/build.2449295135.tar
I1018 11:39:55.392593   18660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 11:39:55.436389   18660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2449295135.tar
I1018 11:39:55.458125   18660 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2449295135.tar: stat -c "%s %y" /var/lib/minikube/build/build.2449295135.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2449295135.tar': No such file or directory
I1018 11:39:55.458157   18660 ssh_runner.go:362] scp /tmp/build.2449295135.tar --> /var/lib/minikube/build/build.2449295135.tar (3072 bytes)
I1018 11:39:55.584742   18660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2449295135
I1018 11:39:55.617401   18660 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2449295135 -xf /var/lib/minikube/build/build.2449295135.tar
I1018 11:39:55.664244   18660 docker.go:361] Building image: /var/lib/minikube/build/build.2449295135
I1018 11:39:55.664388   18660 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-897621 /var/lib/minikube/build/build.2449295135
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ae8196c40753f1edb14fcf18acc922f61ff510a5c3c4651404e4f7763c27de35 done
#8 naming to localhost/my-image:functional-897621 done
#8 DONE 0.1s
I1018 11:39:59.486495   18660 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-897621 /var/lib/minikube/build/build.2449295135: (3.822071188s)
I1018 11:39:59.486573   18660 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2449295135
I1018 11:39:59.515092   18660 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2449295135.tar
I1018 11:39:59.533625   18660 build_images.go:217] Built localhost/my-image:functional-897621 from /tmp/build.2449295135.tar
I1018 11:39:59.533677   18660 build_images.go:133] succeeded building to: functional-897621
I1018 11:39:59.533685   18660 build_images.go:134] failed building to: 
I1018 11:39:59.533748   18660 main.go:141] libmachine: Making call to close driver server
I1018 11:39:59.533770   18660 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:59.534123   18660 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:59.534145   18660 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 11:39:59.534155   18660 main.go:141] libmachine: Making call to close driver server
I1018 11:39:59.534163   18660 main.go:141] libmachine: (functional-897621) Calling .Close
I1018 11:39:59.534459   18660 main.go:141] libmachine: Successfully made call to close driver server
I1018 11:39:59.534480   18660 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)
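
For reference, the BuildKit steps above (#5 resolve/pull, #6 RUN true, #7 ADD content.txt /) correspond to roughly the Dockerfile sketched below. The testdata file itself is not included in the log, so this is a reconstruction from the step names, and the content.txt payload is only a placeholder for the 62B build context; the final line repeats the tag and build command minikube ran on the guest, just executed from the directory holding the two files.

    # Reconstructed from the BuildKit steps above; the real testdata file may differ in detail.
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    printf 'placeholder build context\n' > content.txt    # stand-in for the 62B context transferred in step #4
    docker build -t localhost/my-image:functional-897621 .    # same tag/command minikube ran on the guest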

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.511662443s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-897621
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service list -o json
functional_test.go:1504: Took "267.520583ms" to run "out/minikube-linux-amd64 -p functional-897621 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.228:30955
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.228:30955
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
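
Taken together, the four ServiceCmd subtests above resolve the same hello-node NodePort in different output formats. A rough manual equivalent is sketched below; the curl call is only an illustration of reaching the endpoint, since the test binary performs the HTTP check in Go rather than shelling out.

    URL=$(out/minikube-linux-amd64 -p functional-897621 service hello-node --url)
    echo "$URL"          # resolved to http://192.168.39.228:30955 in this run
    curl -fsS "$URL"     # illustrative reachability check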

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image load --daemon kicbase/echo-server:functional-897621 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-897621 image load --daemon kicbase/echo-server:functional-897621 --alsologtostderr: (1.018641875s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image load --daemon kicbase/echo-server:functional-897621 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-897621
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image load --daemon kicbase/echo-server:functional-897621 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdspecific-port2207593900/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.737769ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 11:39:50.360353    9909 retry.go:31] will retry after 630.929576ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdspecific-port2207593900/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh "sudo umount -f /mount-9p": exit status 1 (235.524325ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-897621 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdspecific-port2207593900/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)
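
The specific-port subtest above boils down to the flow sketched here: start a 9p mount on a fixed port, verify it from inside the guest, tear it down, and confirm that a forced umount of the already-removed mount fails (the exit status 32 / "not mounted." output captured above). The host source directory is a placeholder for the test's temp dir.

    out/minikube-linux-amd64 mount -p functional-897621 /tmp/mount-src:/mount-9p --port 46464 &   # placeholder host dir
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-897621 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"
    out/minikube-linux-amd64 -p functional-897621 ssh "sudo umount -f /mount-9p"   # fails once already unmounted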

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image save kicbase/echo-server:functional-897621 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image rm kicbase/echo-server:functional-897621 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T" /mount1: exit status 1 (333.585132ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 11:39:52.473046    9909 retry.go:31] will retry after 263.46119ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-897621 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897621 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1129166630/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-897621
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-897621 image save --daemon kicbase/echo-server:functional-897621 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-897621
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
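
The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above form one round trip, summarized below with a placeholder tarball path standing in for the Jenkins workspace path used in this run.

    MK="out/minikube-linux-amd64 -p functional-897621"
    $MK image save kicbase/echo-server:functional-897621 /tmp/echo-server-save.tar   # export from the cluster runtime
    $MK image rm kicbase/echo-server:functional-897621                               # remove it in-cluster
    $MK image load /tmp/echo-server-save.tar                                         # restore from the tarball
    $MK image ls                                                                     # verify, as the test does after each step
    $MK image save --daemon kicbase/echo-server:functional-897621                    # copy back into the host docker daemon
    docker image inspect kicbase/echo-server:functional-897621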

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-897621
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-897621
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-897621
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (217.5s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false: (1m21.535807379s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-073301 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-073301 cache add gcr.io/k8s-minikube/gvisor-addon:2: (3.704721084s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-073301 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-073301 addons enable gvisor: (6.365617432s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [ba247fec-47c1-43db-b687-5202777ed032] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004912466s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-073301 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [352cd615-d0b1-43dc-a66a-a183b2db3054] Pending
helpers_test.go:352: "nginx-gvisor" [352cd615-d0b1-43dc-a66a-a183b2db3054] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-gvisor" [352cd615-d0b1-43dc-a66a-a183b2db3054] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 56.006345631s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-073301
E1018 12:17:38.916938    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-073301: (7.238148876s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false: (44.481676613s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [ba247fec-47c1-43db-b687-5202777ed032] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005233322s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [352cd615-d0b1-43dc-a66a-a183b2db3054] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.010280681s
helpers_test.go:175: Cleaning up "gvisor-073301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-073301
--- PASS: TestGvisorAddon (217.50s)
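
In outline, TestGvisorAddon above runs the sequence sketched below: a containerd-backed cluster, the cached addon image, the gvisor addon, an nginx pod scheduled onto the gVisor runtime, and a stop/start to confirm the pod comes back. The manifest contents are not in the log, so the runtimeClassName mentioned here is an assumption based on the pod's runtime=gvisor label; the start commands drop the harness-specific --docker-opt/--auto-update-drivers flags, and the final get is an illustrative check rather than a command from this run.

    out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --driver=kvm2
    out/minikube-linux-amd64 -p gvisor-073301 cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-073301 addons enable gvisor
    kubectl --context gvisor-073301 replace --force -f testdata/nginx-gvisor.yaml   # assumed: pod pinned to runtimeClassName gvisor
    out/minikube-linux-amd64 stop -p gvisor-073301
    out/minikube-linux-amd64 start -p gvisor-073301 --memory=3072 --container-runtime=containerd --driver=kvm2
    kubectl --context gvisor-073301 get pods -l run=nginx,runtime=gvisor             # illustrative readiness check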

TestMultiControlPlane/serial/StartCluster (224.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false
E1018 11:40:29.731796    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:42:45.860991    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:43:13.577611    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false: (3m43.362267737s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (224.10s)

TestMultiControlPlane/serial/DeployApp (6.62s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 kubectl -- rollout status deployment/busybox: (4.212975512s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-hbtvn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-t6wvd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-x9zhq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-hbtvn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-t6wvd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-x9zhq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-hbtvn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-t6wvd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-x9zhq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.62s)

TestMultiControlPlane/serial/PingHostFromPods (1.38s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-hbtvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-hbtvn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-t6wvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-t6wvd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-x9zhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 kubectl -- exec busybox-7b57f96db7-x9zhq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
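
PingHostFromPods above repeats one check per busybox replica: resolve host.minikube.internal inside the pod, then ping the resolved gateway address. One pod's worth of that check is sketched below, written with plain kubectl --context instead of the minikube kubectl -- wrapper the test actually uses.

    # One replica's check; the test loops over all busybox pods listed by the jsonpath query.
    kubectl --context ha-350733 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context ha-350733 exec busybox-7b57f96db7-hbtvn -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-350733 exec busybox-7b57f96db7-hbtvn -- sh -c "ping -c 1 192.168.39.1"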

TestMultiControlPlane/serial/AddWorkerNode (57.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node add --alsologtostderr -v 5
E1018 11:44:35.846050    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:35.852518    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:35.864127    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:35.885612    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:35.927077    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:36.008642    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:36.170226    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:36.491908    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:37.134075    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:38.415444    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:40.977723    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:46.099539    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:44:56.340951    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:45:16.823142    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 node add --alsologtostderr -v 5: (56.157489113s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.09s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-350733 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (13.77s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp testdata/cp-test.txt ha-350733:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1702498821/001/cp-test_ha-350733.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733:/home/docker/cp-test.txt ha-350733-m02:/home/docker/cp-test_ha-350733_ha-350733-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test_ha-350733_ha-350733-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733:/home/docker/cp-test.txt ha-350733-m03:/home/docker/cp-test_ha-350733_ha-350733-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test_ha-350733_ha-350733-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733:/home/docker/cp-test.txt ha-350733-m04:/home/docker/cp-test_ha-350733_ha-350733-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test_ha-350733_ha-350733-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp testdata/cp-test.txt ha-350733-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1702498821/001/cp-test_ha-350733-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m02:/home/docker/cp-test.txt ha-350733:/home/docker/cp-test_ha-350733-m02_ha-350733.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test_ha-350733-m02_ha-350733.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m02:/home/docker/cp-test.txt ha-350733-m03:/home/docker/cp-test_ha-350733-m02_ha-350733-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test_ha-350733-m02_ha-350733-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m02:/home/docker/cp-test.txt ha-350733-m04:/home/docker/cp-test_ha-350733-m02_ha-350733-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test_ha-350733-m02_ha-350733-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp testdata/cp-test.txt ha-350733-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1702498821/001/cp-test_ha-350733-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m03:/home/docker/cp-test.txt ha-350733:/home/docker/cp-test_ha-350733-m03_ha-350733.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test_ha-350733-m03_ha-350733.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m03:/home/docker/cp-test.txt ha-350733-m02:/home/docker/cp-test_ha-350733-m03_ha-350733-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test_ha-350733-m03_ha-350733-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m03:/home/docker/cp-test.txt ha-350733-m04:/home/docker/cp-test_ha-350733-m03_ha-350733-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test_ha-350733-m03_ha-350733-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp testdata/cp-test.txt ha-350733-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1702498821/001/cp-test_ha-350733-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m04:/home/docker/cp-test.txt ha-350733:/home/docker/cp-test_ha-350733-m04_ha-350733.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733 "sudo cat /home/docker/cp-test_ha-350733-m04_ha-350733.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m04:/home/docker/cp-test.txt ha-350733-m02:/home/docker/cp-test_ha-350733-m04_ha-350733-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m02 "sudo cat /home/docker/cp-test_ha-350733-m04_ha-350733-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 cp ha-350733-m04:/home/docker/cp-test.txt ha-350733-m03:/home/docker/cp-test_ha-350733-m04_ha-350733-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 ssh -n ha-350733-m03 "sudo cat /home/docker/cp-test_ha-350733-m04_ha-350733-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.77s)
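
The CopyFile block above is the same two-step pattern applied to every source/destination pair in the four-node cluster: minikube cp onto a node, then minikube ssh plus sudo cat to verify the bytes arrived. Condensed into a loop (node names taken from this run; the copies back to the host temp dir are omitted):

    NODES="ha-350733 ha-350733-m02 ha-350733-m03 ha-350733-m04"
    for SRC in $NODES; do
      out/minikube-linux-amd64 -p ha-350733 cp testdata/cp-test.txt "$SRC:/home/docker/cp-test.txt"
      for DST in $NODES; do
        [ "$SRC" = "$DST" ] && continue
        out/minikube-linux-amd64 -p ha-350733 cp "$SRC:/home/docker/cp-test.txt" "$DST:/home/docker/cp-test_${SRC}_${DST}.txt"
        out/minikube-linux-amd64 -p ha-350733 ssh -n "$DST" "sudo cat /home/docker/cp-test_${SRC}_${DST}.txt"
      done
    done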

TestMultiControlPlane/serial/StopSecondaryNode (13.94s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 node stop m02 --alsologtostderr -v 5: (13.241295853s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5: exit status 7 (694.499659ms)

-- stdout --
	ha-350733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-350733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-350733-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-350733-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1018 11:45:46.183399   23351 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:45:46.183521   23351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:45:46.183530   23351 out.go:374] Setting ErrFile to fd 2...
	I1018 11:45:46.183534   23351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:45:46.183750   23351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 11:45:46.183940   23351 out.go:368] Setting JSON to false
	I1018 11:45:46.183974   23351 mustload.go:65] Loading cluster: ha-350733
	I1018 11:45:46.184031   23351 notify.go:220] Checking for updates...
	I1018 11:45:46.184429   23351 config.go:182] Loaded profile config "ha-350733": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 11:45:46.184447   23351 status.go:174] checking status of ha-350733 ...
	I1018 11:45:46.184879   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.184926   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.203899   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1018 11:45:46.204483   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.205101   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.205122   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.205660   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.205932   23351 main.go:141] libmachine: (ha-350733) Calling .GetState
	I1018 11:45:46.208447   23351 status.go:371] ha-350733 host status = "Running" (err=<nil>)
	I1018 11:45:46.208463   23351 host.go:66] Checking if "ha-350733" exists ...
	I1018 11:45:46.208753   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.208818   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.222920   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I1018 11:45:46.223389   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.223875   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.223900   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.224348   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.224597   23351 main.go:141] libmachine: (ha-350733) Calling .GetIP
	I1018 11:45:46.228176   23351 main.go:141] libmachine: (ha-350733) DBG | domain ha-350733 has defined MAC address 52:54:00:cb:10:cf in network mk-ha-350733
	I1018 11:45:46.228741   23351 main.go:141] libmachine: (ha-350733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:10:cf", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:40:44 +0000 UTC Type:0 Mac:52:54:00:cb:10:cf Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-350733 Clientid:01:52:54:00:cb:10:cf}
	I1018 11:45:46.228767   23351 main.go:141] libmachine: (ha-350733) DBG | domain ha-350733 has defined IP address 192.168.39.158 and MAC address 52:54:00:cb:10:cf in network mk-ha-350733
	I1018 11:45:46.229071   23351 host.go:66] Checking if "ha-350733" exists ...
	I1018 11:45:46.229402   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.229441   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.243709   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I1018 11:45:46.244273   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.244843   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.244873   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.245236   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.245451   23351 main.go:141] libmachine: (ha-350733) Calling .DriverName
	I1018 11:45:46.245714   23351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:45:46.245749   23351 main.go:141] libmachine: (ha-350733) Calling .GetSSHHostname
	I1018 11:45:46.250438   23351 main.go:141] libmachine: (ha-350733) DBG | domain ha-350733 has defined MAC address 52:54:00:cb:10:cf in network mk-ha-350733
	I1018 11:45:46.251128   23351 main.go:141] libmachine: (ha-350733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:10:cf", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:40:44 +0000 UTC Type:0 Mac:52:54:00:cb:10:cf Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-350733 Clientid:01:52:54:00:cb:10:cf}
	I1018 11:45:46.251168   23351 main.go:141] libmachine: (ha-350733) DBG | domain ha-350733 has defined IP address 192.168.39.158 and MAC address 52:54:00:cb:10:cf in network mk-ha-350733
	I1018 11:45:46.251381   23351 main.go:141] libmachine: (ha-350733) Calling .GetSSHPort
	I1018 11:45:46.251577   23351 main.go:141] libmachine: (ha-350733) Calling .GetSSHKeyPath
	I1018 11:45:46.251726   23351 main.go:141] libmachine: (ha-350733) Calling .GetSSHUsername
	I1018 11:45:46.251875   23351 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/ha-350733/id_rsa Username:docker}
	I1018 11:45:46.338530   23351 ssh_runner.go:195] Run: systemctl --version
	I1018 11:45:46.345325   23351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:45:46.363475   23351 kubeconfig.go:125] found "ha-350733" server: "https://192.168.39.254:8443"
	I1018 11:45:46.363520   23351 api_server.go:166] Checking apiserver status ...
	I1018 11:45:46.363572   23351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:45:46.390388   23351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2440/cgroup
	W1018 11:45:46.406284   23351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:45:46.406349   23351 ssh_runner.go:195] Run: ls
	I1018 11:45:46.414012   23351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 11:45:46.421598   23351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 11:45:46.421624   23351 status.go:463] ha-350733 apiserver status = Running (err=<nil>)
	I1018 11:45:46.421633   23351 status.go:176] ha-350733 status: &{Name:ha-350733 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:45:46.421655   23351 status.go:174] checking status of ha-350733-m02 ...
	I1018 11:45:46.421945   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.421979   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.435971   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1018 11:45:46.436465   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.436855   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.436877   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.437317   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.437530   23351 main.go:141] libmachine: (ha-350733-m02) Calling .GetState
	I1018 11:45:46.439240   23351 status.go:371] ha-350733-m02 host status = "Stopped" (err=<nil>)
	I1018 11:45:46.439255   23351 status.go:384] host is not running, skipping remaining checks
	I1018 11:45:46.439261   23351 status.go:176] ha-350733-m02 status: &{Name:ha-350733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:45:46.439309   23351 status.go:174] checking status of ha-350733-m03 ...
	I1018 11:45:46.439666   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.439707   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.453647   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1018 11:45:46.454110   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.454516   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.454540   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.454939   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.455147   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetState
	I1018 11:45:46.457359   23351 status.go:371] ha-350733-m03 host status = "Running" (err=<nil>)
	I1018 11:45:46.457376   23351 host.go:66] Checking if "ha-350733-m03" exists ...
	I1018 11:45:46.457725   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.457775   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.471104   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1018 11:45:46.471650   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.472144   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.472161   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.472608   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.472805   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetIP
	I1018 11:45:46.476570   23351 main.go:141] libmachine: (ha-350733-m03) DBG | domain ha-350733-m03 has defined MAC address 52:54:00:82:b3:4e in network mk-ha-350733
	I1018 11:45:46.477173   23351 main.go:141] libmachine: (ha-350733-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:b3:4e", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:42:57 +0000 UTC Type:0 Mac:52:54:00:82:b3:4e Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-350733-m03 Clientid:01:52:54:00:82:b3:4e}
	I1018 11:45:46.477216   23351 main.go:141] libmachine: (ha-350733-m03) DBG | domain ha-350733-m03 has defined IP address 192.168.39.117 and MAC address 52:54:00:82:b3:4e in network mk-ha-350733
	I1018 11:45:46.477426   23351 host.go:66] Checking if "ha-350733-m03" exists ...
	I1018 11:45:46.477733   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.477769   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.492195   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I1018 11:45:46.492701   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.493118   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.493143   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.493499   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.493767   23351 main.go:141] libmachine: (ha-350733-m03) Calling .DriverName
	I1018 11:45:46.494003   23351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:45:46.494023   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetSSHHostname
	I1018 11:45:46.497610   23351 main.go:141] libmachine: (ha-350733-m03) DBG | domain ha-350733-m03 has defined MAC address 52:54:00:82:b3:4e in network mk-ha-350733
	I1018 11:45:46.498125   23351 main.go:141] libmachine: (ha-350733-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:b3:4e", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:42:57 +0000 UTC Type:0 Mac:52:54:00:82:b3:4e Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-350733-m03 Clientid:01:52:54:00:82:b3:4e}
	I1018 11:45:46.498158   23351 main.go:141] libmachine: (ha-350733-m03) DBG | domain ha-350733-m03 has defined IP address 192.168.39.117 and MAC address 52:54:00:82:b3:4e in network mk-ha-350733
	I1018 11:45:46.498387   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetSSHPort
	I1018 11:45:46.498558   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetSSHKeyPath
	I1018 11:45:46.498722   23351 main.go:141] libmachine: (ha-350733-m03) Calling .GetSSHUsername
	I1018 11:45:46.498869   23351 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/ha-350733-m03/id_rsa Username:docker}
	I1018 11:45:46.583763   23351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:45:46.604856   23351 kubeconfig.go:125] found "ha-350733" server: "https://192.168.39.254:8443"
	I1018 11:45:46.604885   23351 api_server.go:166] Checking apiserver status ...
	I1018 11:45:46.604920   23351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:45:46.625302   23351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2258/cgroup
	W1018 11:45:46.636837   23351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2258/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:45:46.636924   23351 ssh_runner.go:195] Run: ls
	I1018 11:45:46.642374   23351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 11:45:46.648339   23351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 11:45:46.648380   23351 status.go:463] ha-350733-m03 apiserver status = Running (err=<nil>)
	I1018 11:45:46.648390   23351 status.go:176] ha-350733-m03 status: &{Name:ha-350733-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:45:46.648405   23351 status.go:174] checking status of ha-350733-m04 ...
	I1018 11:45:46.648687   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.648729   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.666951   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34361
	I1018 11:45:46.667401   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.667864   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.667889   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.668226   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.668509   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetState
	I1018 11:45:46.670666   23351 status.go:371] ha-350733-m04 host status = "Running" (err=<nil>)
	I1018 11:45:46.670688   23351 host.go:66] Checking if "ha-350733-m04" exists ...
	I1018 11:45:46.671064   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.671102   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.685605   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I1018 11:45:46.686115   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.686638   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.686663   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.686989   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.687239   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetIP
	I1018 11:45:46.690642   23351 main.go:141] libmachine: (ha-350733-m04) DBG | domain ha-350733-m04 has defined MAC address 52:54:00:98:6e:5d in network mk-ha-350733
	I1018 11:45:46.691236   23351 main.go:141] libmachine: (ha-350733-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:6e:5d", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:44:37 +0000 UTC Type:0 Mac:52:54:00:98:6e:5d Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-350733-m04 Clientid:01:52:54:00:98:6e:5d}
	I1018 11:45:46.691267   23351 main.go:141] libmachine: (ha-350733-m04) DBG | domain ha-350733-m04 has defined IP address 192.168.39.99 and MAC address 52:54:00:98:6e:5d in network mk-ha-350733
	I1018 11:45:46.691411   23351 host.go:66] Checking if "ha-350733-m04" exists ...
	I1018 11:45:46.691807   23351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:45:46.691859   23351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:45:46.705788   23351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I1018 11:45:46.706235   23351 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:45:46.706732   23351 main.go:141] libmachine: Using API Version  1
	I1018 11:45:46.706758   23351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:45:46.707151   23351 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:45:46.707387   23351 main.go:141] libmachine: (ha-350733-m04) Calling .DriverName
	I1018 11:45:46.707602   23351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:45:46.707624   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetSSHHostname
	I1018 11:45:46.711536   23351 main.go:141] libmachine: (ha-350733-m04) DBG | domain ha-350733-m04 has defined MAC address 52:54:00:98:6e:5d in network mk-ha-350733
	I1018 11:45:46.712110   23351 main.go:141] libmachine: (ha-350733-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:6e:5d", ip: ""} in network mk-ha-350733: {Iface:virbr1 ExpiryTime:2025-10-18 12:44:37 +0000 UTC Type:0 Mac:52:54:00:98:6e:5d Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-350733-m04 Clientid:01:52:54:00:98:6e:5d}
	I1018 11:45:46.712141   23351 main.go:141] libmachine: (ha-350733-m04) DBG | domain ha-350733-m04 has defined IP address 192.168.39.99 and MAC address 52:54:00:98:6e:5d in network mk-ha-350733
	I1018 11:45:46.712438   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetSSHPort
	I1018 11:45:46.712654   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetSSHKeyPath
	I1018 11:45:46.712813   23351 main.go:141] libmachine: (ha-350733-m04) Calling .GetSSHUsername
	I1018 11:45:46.713096   23351 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/ha-350733-m04/id_rsa Username:docker}
	I1018 11:45:46.800849   23351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:45:46.825147   23351 status.go:176] ha-350733-m04 status: &{Name:ha-350733-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.94s)
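For readers tracing the status output captured above: each node is probed with a disk-usage command, a kubelet unit check, and an apiserver /healthz request. The Go sketch below reproduces those three probes standalone; it is illustrative only (not minikube source), and the VIP 192.168.39.254 and the skipped TLS verification are assumptions carried over from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Disk usage of /var, mirroring: sh -c "df -h /var | awk 'NR==2{print $5}'"
	usage, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err == nil {
		fmt.Println("/var usage:", strings.TrimSpace(string(usage)))
	}

	// Kubelet state, mirroring: sudo systemctl is-active --quiet service kubelet
	// (exit code 0 means the unit is active; sudo is dropped in this sketch).
	if exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Stopped")
	}

	// Apiserver health, mirroring: "Checking apiserver healthz at https://192.168.39.254:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz:", resp.Status)
}
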
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)
TestMultiControlPlane/serial/RestartSecondaryNode (24.52s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node start m02 --alsologtostderr -v 5
E1018 11:45:57.785658    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 node start m02 --alsologtostderr -v 5: (23.421281898s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5: (1.016477983s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.52s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.028936527s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.42s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 stop --alsologtostderr -v 5: (42.797890275s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 start --wait true --alsologtostderr -v 5
E1018 11:47:19.707858    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:47:45.860839    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 start --wait true --alsologtostderr -v 5: (2m12.497849745s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.42s)
TestMultiControlPlane/serial/DeleteSecondaryNode (7.87s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 node delete m03 --alsologtostderr -v 5: (7.037024234s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.87s)
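The go-template passed to kubectl above walks every node's status.conditions and prints the status of the Ready condition. Below is a small, self-contained sketch that evaluates the same template with Go's text/template over a stubbed node list; the stub data is made up purely for illustration.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template as in the test, verbatim.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stubbed "kubectl get nodes" payload: two nodes, each with a Ready condition.
	node := func(ready string) map[string]interface{} {
		return map[string]interface{}{
			"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "MemoryPressure", "status": "False"},
					map[string]interface{}{"type": "Ready", "status": ready},
				},
			},
		}
	}
	data := map[string]interface{}{"items": []interface{}{node("True"), node("True")}}

	// Prints one " True" line per node, which is what the test expects to see.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, data)
}
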
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
TestMultiControlPlane/serial/StopCluster (40.74s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 stop --alsologtostderr -v 5
E1018 11:49:35.846031    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 stop --alsologtostderr -v 5: (40.623881779s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5: exit status 7 (115.305979ms)
-- stdout --
	ha-350733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-350733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-350733-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1018 11:49:57.725416   25561 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:49:57.725629   25561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:49:57.725653   25561 out.go:374] Setting ErrFile to fd 2...
	I1018 11:49:57.725664   25561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:49:57.725871   25561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 11:49:57.726083   25561 out.go:368] Setting JSON to false
	I1018 11:49:57.726109   25561 mustload.go:65] Loading cluster: ha-350733
	I1018 11:49:57.726160   25561 notify.go:220] Checking for updates...
	I1018 11:49:57.726514   25561 config.go:182] Loaded profile config "ha-350733": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 11:49:57.726528   25561 status.go:174] checking status of ha-350733 ...
	I1018 11:49:57.726898   25561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:49:57.726946   25561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:49:57.751180   25561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1018 11:49:57.751790   25561 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:49:57.752558   25561 main.go:141] libmachine: Using API Version  1
	I1018 11:49:57.752584   25561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:49:57.753035   25561 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:49:57.753319   25561 main.go:141] libmachine: (ha-350733) Calling .GetState
	I1018 11:49:57.755681   25561 status.go:371] ha-350733 host status = "Stopped" (err=<nil>)
	I1018 11:49:57.755706   25561 status.go:384] host is not running, skipping remaining checks
	I1018 11:49:57.755716   25561 status.go:176] ha-350733 status: &{Name:ha-350733 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:49:57.755741   25561 status.go:174] checking status of ha-350733-m02 ...
	I1018 11:49:57.756059   25561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:49:57.756121   25561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:49:57.770175   25561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I1018 11:49:57.770633   25561 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:49:57.771114   25561 main.go:141] libmachine: Using API Version  1
	I1018 11:49:57.771144   25561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:49:57.771650   25561 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:49:57.771875   25561 main.go:141] libmachine: (ha-350733-m02) Calling .GetState
	I1018 11:49:57.774193   25561 status.go:371] ha-350733-m02 host status = "Stopped" (err=<nil>)
	I1018 11:49:57.774209   25561 status.go:384] host is not running, skipping remaining checks
	I1018 11:49:57.774214   25561 status.go:176] ha-350733-m02 status: &{Name:ha-350733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:49:57.774243   25561 status.go:174] checking status of ha-350733-m04 ...
	I1018 11:49:57.774566   25561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 11:49:57.774609   25561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 11:49:57.788544   25561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I1018 11:49:57.788966   25561 main.go:141] libmachine: () Calling .GetVersion
	I1018 11:49:57.789469   25561 main.go:141] libmachine: Using API Version  1
	I1018 11:49:57.789493   25561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 11:49:57.789824   25561 main.go:141] libmachine: () Calling .GetMachineName
	I1018 11:49:57.790032   25561 main.go:141] libmachine: (ha-350733-m04) Calling .GetState
	I1018 11:49:57.792007   25561 status.go:371] ha-350733-m04 host status = "Stopped" (err=<nil>)
	I1018 11:49:57.792030   25561 status.go:384] host is not running, skipping remaining checks
	I1018 11:49:57.792038   25561 status.go:176] ha-350733-m04 status: &{Name:ha-350733-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (40.74s)
TestMultiControlPlane/serial/RestartCluster (120.64s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 start --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false
E1018 11:50:03.553091    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 start --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false: (1m59.792402866s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.64s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
TestMultiControlPlane/serial/AddSecondaryNode (83.46s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 node add --control-plane --alsologtostderr -v 5
E1018 11:52:45.860822    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-350733 node add --control-plane --alsologtostderr -v 5: (1m22.543351334s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-350733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.46s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
TestImageBuild/serial/Setup (43.45s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-397615 --driver=kvm2  --auto-update-drivers=false
E1018 11:54:08.939831    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-397615 --driver=kvm2  --auto-update-drivers=false: (43.449431238s)
--- PASS: TestImageBuild/serial/Setup (43.45s)
TestImageBuild/serial/NormalBuild (1.54s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-397615
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-397615: (1.539624264s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)
TestImageBuild/serial/BuildWithBuildArg (0.96s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-397615
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)
TestImageBuild/serial/BuildWithDockerIgnore (0.78s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-397615
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-397615
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)
TestJSONOutput/start/Command (88.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-492543 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false
E1018 11:54:35.854930    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-492543 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false: (1m28.463066361s)
--- PASS: TestJSONOutput/start/Command (88.46s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.64s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-492543 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-492543 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (6.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-492543 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-492543 --output=json --user=testUser: (6.777175508s)
--- PASS: TestJSONOutput/stop/Command (6.78s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-236814 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-236814 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.194874ms)
-- stdout --
	{"specversion":"1.0","id":"3612f2f1-6411-42e8-9584-b71ed209c6f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-236814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b62f98b-4ad8-499b-a40a-5d44854bc9e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"734dfeec-3cd0-4fd4-b972-cffc6a3acb5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"410b496a-02ff-4623-bfc6-8268c49bfbf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig"}}
	{"specversion":"1.0","id":"73361c7b-750a-4381-8ee3-f95f1976f1ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube"}}
	{"specversion":"1.0","id":"e3a29635-f641-40df-a9aa-0ebadcc035ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f300a12b-b215-4024-b4b7-ea590dec84f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07e412ef-9169-4c3e-b9ba-5bd38c1b535a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-236814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-236814
--- PASS: TestErrorJSONOutput (0.22s)
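Each line of the --output=json stream captured above is a CloudEvents envelope whose data payload carries minikube's step/info/error fields. A minimal Go sketch that decodes the error event the test asserts on; the struct shape is inferred from the captured output, not taken from minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the captured stdout above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The final line of the captured stdout, verbatim.
	line := `{"specversion":"1.0","id":"07e412ef-9169-4c3e-b9ba-5bd38c1b535a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Exit code 56 with name DRV_UNSUPPORTED_OS is what the test expects for an
	// unsupported --driver value.
	fmt.Printf("%s (%s): %s, exit code %s\n", ev.Type, ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}
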
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (90.73s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-786293 --driver=kvm2  --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-786293 --driver=kvm2  --auto-update-drivers=false: (42.924317143s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-788464 --driver=kvm2  --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-788464 --driver=kvm2  --auto-update-drivers=false: (44.909673894s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-786293
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-788464
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-788464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-788464
helpers_test.go:175: Cleaning up "first-786293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-786293
--- PASS: TestMinikubeProfile (90.73s)
TestMountStart/serial/StartWithMountFirst (22.97s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-333739 --memory=3072 --mount-string /tmp/TestMountStartserial1972843747/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-333739 --memory=3072 --mount-string /tmp/TestMountStartserial1972843747/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false: (21.968525359s)
E1018 11:57:45.860407    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountFirst (22.97s)
TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-333739 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-333739 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
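The VerifyMount checks above lean on findmnt --json, whose output is a small "filesystems" array with target/source/fstype/options fields. A hedged Go sketch that runs the same probe locally and decodes it; it is illustrative only and succeeds only on a machine that actually has /minikube-host mounted.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut matches the standard `findmnt --json` shape.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Same probe the test issues over SSH, run locally for illustration.
	raw, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("/minikube-host not mounted:", err)
		return
	}
	var out findmntOut
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}
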
TestMountStart/serial/StartWithMountSecond (24.19s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-346411 --memory=3072 --mount-string /tmp/TestMountStartserial1972843747/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-346411 --memory=3072 --mount-string /tmp/TestMountStartserial1972843747/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false: (23.186040741s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.19s)
TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)
TestMountStart/serial/DeleteFirst (0.75s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-333739 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.75s)
TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)
TestMountStart/serial/Stop (1.31s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-346411
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-346411: (1.311516392s)
--- PASS: TestMountStart/serial/Stop (1.31s)
TestMountStart/serial/RestartStopped (21.74s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-346411
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-346411: (20.743835168s)
--- PASS: TestMountStart/serial/RestartStopped (21.74s)
TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-346411 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)
TestMultiNode/serial/FreshStart2Nodes (116.68s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-480105 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false
E1018 11:59:35.845182    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-480105 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false: (1m56.237504791s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.68s)
TestMultiNode/serial/DeployApp2Nodes (5.51s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-480105 -- rollout status deployment/busybox: (3.85011848s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-pvx5x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-qthn2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-pvx5x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-qthn2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-pvx5x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-qthn2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.51s)
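The DeployApp2Nodes sequence above schedules two busybox pods (one per node) and checks that each can resolve a public name, the in-cluster service name, and its fully qualified form. A rough Go equivalent driven through plain kubectl; the pod names are copied from the log and would differ on another run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pod names from the log above; regenerate them on a live cluster.
	pods := []string{"busybox-7b57f96db7-pvx5x", "busybox-7b57f96db7-qthn2"}
	// Public name, in-cluster service name, and its fully qualified form.
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-480105",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s: lookup failed: %v\n", pod, name, err)
				continue
			}
			fmt.Printf("%s -> %s: ok\n%s", pod, name, out)
		}
	}
}
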
TestMultiNode/serial/PingHostFrom2Pods (0.91s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-pvx5x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-pvx5x -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-qthn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-480105 -- exec busybox-7b57f96db7-qthn2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
TestMultiNode/serial/AddNode (50.77s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-480105 -v=5 --alsologtostderr
E1018 12:00:58.915440    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-480105 -v=5 --alsologtostderr: (50.158087847s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.77s)
TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-480105 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)
TestMultiNode/serial/ProfileList (0.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)
TestMultiNode/serial/CopyFile (7.6s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp testdata/cp-test.txt multinode-480105:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275141272/001/cp-test_multinode-480105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105:/home/docker/cp-test.txt multinode-480105-m02:/home/docker/cp-test_multinode-480105_multinode-480105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test_multinode-480105_multinode-480105-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105:/home/docker/cp-test.txt multinode-480105-m03:/home/docker/cp-test_multinode-480105_multinode-480105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test_multinode-480105_multinode-480105-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp testdata/cp-test.txt multinode-480105-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275141272/001/cp-test_multinode-480105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m02:/home/docker/cp-test.txt multinode-480105:/home/docker/cp-test_multinode-480105-m02_multinode-480105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test_multinode-480105-m02_multinode-480105.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m02:/home/docker/cp-test.txt multinode-480105-m03:/home/docker/cp-test_multinode-480105-m02_multinode-480105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test_multinode-480105-m02_multinode-480105-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp testdata/cp-test.txt multinode-480105-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275141272/001/cp-test_multinode-480105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m03:/home/docker/cp-test.txt multinode-480105:/home/docker/cp-test_multinode-480105-m03_multinode-480105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105 "sudo cat /home/docker/cp-test_multinode-480105-m03_multinode-480105.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 cp multinode-480105-m03:/home/docker/cp-test.txt multinode-480105-m02:/home/docker/cp-test_multinode-480105-m03_multinode-480105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 ssh -n multinode-480105-m02 "sudo cat /home/docker/cp-test_multinode-480105-m03_multinode-480105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.60s)
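Each hop in the CopyFile matrix above is the same two-step round trip: push testdata/cp-test.txt with minikube cp, then read it back over minikube ssh and compare. A condensed Go sketch of one hop; the binary path, profile, and node name are taken from the log, and the comparison with TrimSpace is an assumption of this sketch rather than the test's actual helper.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // binary path as used in the log
		profile  = "multinode-480105"
		node     = "multinode-480105-m02"
	)
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Step 1: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
	if err := exec.Command(minikube, "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Step 2: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
	got, err := exec.Command(minikube, "-p", profile, "ssh",
		"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}
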
TestMultiNode/serial/StopNode (2.68s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-480105 node stop m03: (1.776402344s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-480105 status: exit status 7 (453.240063ms)

                                                
                                                
-- stdout --
	multinode-480105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-480105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-480105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr: exit status 7 (453.992112ms)

                                                
                                                
-- stdout --
	multinode-480105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-480105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-480105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:01:40.837820   33963 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:01:40.838084   33963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:01:40.838094   33963 out.go:374] Setting ErrFile to fd 2...
	I1018 12:01:40.838099   33963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:01:40.838362   33963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 12:01:40.838547   33963 out.go:368] Setting JSON to false
	I1018 12:01:40.838575   33963 mustload.go:65] Loading cluster: multinode-480105
	I1018 12:01:40.838727   33963 notify.go:220] Checking for updates...
	I1018 12:01:40.839007   33963 config.go:182] Loaded profile config "multinode-480105": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:01:40.839024   33963 status.go:174] checking status of multinode-480105 ...
	I1018 12:01:40.839700   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:40.839759   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:40.862604   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1018 12:01:40.863239   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:40.863910   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:40.863943   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:40.864597   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:40.864879   33963 main.go:141] libmachine: (multinode-480105) Calling .GetState
	I1018 12:01:40.867600   33963 status.go:371] multinode-480105 host status = "Running" (err=<nil>)
	I1018 12:01:40.867624   33963 host.go:66] Checking if "multinode-480105" exists ...
	I1018 12:01:40.868126   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:40.868186   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:40.882650   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I1018 12:01:40.883236   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:40.883718   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:40.883741   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:40.884037   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:40.884282   33963 main.go:141] libmachine: (multinode-480105) Calling .GetIP
	I1018 12:01:40.888099   33963 main.go:141] libmachine: (multinode-480105) DBG | domain multinode-480105 has defined MAC address 52:54:00:b6:19:fe in network mk-multinode-480105
	I1018 12:01:40.888742   33963 main.go:141] libmachine: (multinode-480105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:19:fe", ip: ""} in network mk-multinode-480105: {Iface:virbr1 ExpiryTime:2025-10-18 12:58:51 +0000 UTC Type:0 Mac:52:54:00:b6:19:fe Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:multinode-480105 Clientid:01:52:54:00:b6:19:fe}
	I1018 12:01:40.888773   33963 main.go:141] libmachine: (multinode-480105) DBG | domain multinode-480105 has defined IP address 192.168.39.223 and MAC address 52:54:00:b6:19:fe in network mk-multinode-480105
	I1018 12:01:40.889010   33963 host.go:66] Checking if "multinode-480105" exists ...
	I1018 12:01:40.889411   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:40.889468   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:40.903855   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I1018 12:01:40.904355   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:40.904805   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:40.904832   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:40.905214   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:40.905440   33963 main.go:141] libmachine: (multinode-480105) Calling .DriverName
	I1018 12:01:40.905735   33963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:01:40.905874   33963 main.go:141] libmachine: (multinode-480105) Calling .GetSSHHostname
	I1018 12:01:40.909312   33963 main.go:141] libmachine: (multinode-480105) DBG | domain multinode-480105 has defined MAC address 52:54:00:b6:19:fe in network mk-multinode-480105
	I1018 12:01:40.909806   33963 main.go:141] libmachine: (multinode-480105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:19:fe", ip: ""} in network mk-multinode-480105: {Iface:virbr1 ExpiryTime:2025-10-18 12:58:51 +0000 UTC Type:0 Mac:52:54:00:b6:19:fe Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:multinode-480105 Clientid:01:52:54:00:b6:19:fe}
	I1018 12:01:40.909852   33963 main.go:141] libmachine: (multinode-480105) DBG | domain multinode-480105 has defined IP address 192.168.39.223 and MAC address 52:54:00:b6:19:fe in network mk-multinode-480105
	I1018 12:01:40.910025   33963 main.go:141] libmachine: (multinode-480105) Calling .GetSSHPort
	I1018 12:01:40.910227   33963 main.go:141] libmachine: (multinode-480105) Calling .GetSSHKeyPath
	I1018 12:01:40.910432   33963 main.go:141] libmachine: (multinode-480105) Calling .GetSSHUsername
	I1018 12:01:40.910572   33963 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/multinode-480105/id_rsa Username:docker}
	I1018 12:01:40.992717   33963 ssh_runner.go:195] Run: systemctl --version
	I1018 12:01:40.999457   33963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:41.018209   33963 kubeconfig.go:125] found "multinode-480105" server: "https://192.168.39.223:8443"
	I1018 12:01:41.018248   33963 api_server.go:166] Checking apiserver status ...
	I1018 12:01:41.018312   33963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:01:41.040541   33963 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2402/cgroup
	W1018 12:01:41.052490   33963 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2402/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:01:41.052550   33963 ssh_runner.go:195] Run: ls
	I1018 12:01:41.057894   33963 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1018 12:01:41.063006   33963 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I1018 12:01:41.063029   33963 status.go:463] multinode-480105 apiserver status = Running (err=<nil>)
	I1018 12:01:41.063038   33963 status.go:176] multinode-480105 status: &{Name:multinode-480105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:01:41.063058   33963 status.go:174] checking status of multinode-480105-m02 ...
	I1018 12:01:41.063386   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:41.063422   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:41.077763   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
	I1018 12:01:41.078191   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:41.078660   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:41.078681   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:41.079025   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:41.079299   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetState
	I1018 12:01:41.081131   33963 status.go:371] multinode-480105-m02 host status = "Running" (err=<nil>)
	I1018 12:01:41.081148   33963 host.go:66] Checking if "multinode-480105-m02" exists ...
	I1018 12:01:41.081658   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:41.081704   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:41.095926   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1018 12:01:41.096410   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:41.096839   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:41.096862   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:41.097323   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:41.097562   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetIP
	I1018 12:01:41.100710   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | domain multinode-480105-m02 has defined MAC address 52:54:00:db:36:a8 in network mk-multinode-480105
	I1018 12:01:41.101222   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:36:a8", ip: ""} in network mk-multinode-480105: {Iface:virbr1 ExpiryTime:2025-10-18 12:59:56 +0000 UTC Type:0 Mac:52:54:00:db:36:a8 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-480105-m02 Clientid:01:52:54:00:db:36:a8}
	I1018 12:01:41.101243   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | domain multinode-480105-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:db:36:a8 in network mk-multinode-480105
	I1018 12:01:41.101476   33963 host.go:66] Checking if "multinode-480105-m02" exists ...
	I1018 12:01:41.101781   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:41.101816   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:41.115467   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I1018 12:01:41.115978   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:41.116467   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:41.116482   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:41.116850   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:41.117103   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .DriverName
	I1018 12:01:41.117375   33963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:01:41.117401   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetSSHHostname
	I1018 12:01:41.120803   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | domain multinode-480105-m02 has defined MAC address 52:54:00:db:36:a8 in network mk-multinode-480105
	I1018 12:01:41.121369   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:36:a8", ip: ""} in network mk-multinode-480105: {Iface:virbr1 ExpiryTime:2025-10-18 12:59:56 +0000 UTC Type:0 Mac:52:54:00:db:36:a8 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-480105-m02 Clientid:01:52:54:00:db:36:a8}
	I1018 12:01:41.121399   33963 main.go:141] libmachine: (multinode-480105-m02) DBG | domain multinode-480105-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:db:36:a8 in network mk-multinode-480105
	I1018 12:01:41.121579   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetSSHPort
	I1018 12:01:41.121775   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetSSHKeyPath
	I1018 12:01:41.121973   33963 main.go:141] libmachine: (multinode-480105-m02) Calling .GetSSHUsername
	I1018 12:01:41.122207   33963 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21647-6010/.minikube/machines/multinode-480105-m02/id_rsa Username:docker}
	I1018 12:01:41.205456   33963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:41.224191   33963 status.go:176] multinode-480105-m02 status: &{Name:multinode-480105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:01:41.224235   33963 status.go:174] checking status of multinode-480105-m03 ...
	I1018 12:01:41.224582   33963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:01:41.224635   33963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:01:41.238853   33963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I1018 12:01:41.239360   33963 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:01:41.239802   33963 main.go:141] libmachine: Using API Version  1
	I1018 12:01:41.239821   33963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:01:41.240219   33963 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:01:41.240450   33963 main.go:141] libmachine: (multinode-480105-m03) Calling .GetState
	I1018 12:01:41.242531   33963 status.go:371] multinode-480105-m03 host status = "Stopped" (err=<nil>)
	I1018 12:01:41.242548   33963 status.go:384] host is not running, skipping remaining checks
	I1018 12:01:41.242555   33963 status.go:176] multinode-480105-m03 status: &{Name:multinode-480105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.68s)

TestMultiNode/serial/StartAfterStop (39.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-480105 node start m03 -v=5 --alsologtostderr: (39.08153848s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.77s)

TestMultiNode/serial/RestartKeepsNodes (179.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-480105
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-480105
E1018 12:02:45.861334    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-480105: (29.736916865s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-480105 --wait=true -v=5 --alsologtostderr
E1018 12:04:35.846610    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-480105 --wait=true -v=5 --alsologtostderr: (2m29.846410557s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-480105
--- PASS: TestMultiNode/serial/RestartKeepsNodes (179.69s)

TestMultiNode/serial/DeleteNode (2.44s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-480105 node delete m03: (1.861474594s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.44s)

TestMultiNode/serial/StopMultiNode (26.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-480105 stop: (26.289050468s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-480105 status: exit status 7 (83.136099ms)

                                                
                                                
-- stdout --
	multinode-480105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-480105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr: exit status 7 (84.070666ms)

                                                
                                                
-- stdout --
	multinode-480105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-480105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:05:49.551495   35794 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:05:49.551781   35794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:05:49.551792   35794 out.go:374] Setting ErrFile to fd 2...
	I1018 12:05:49.551797   35794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:05:49.552001   35794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
	I1018 12:05:49.552188   35794 out.go:368] Setting JSON to false
	I1018 12:05:49.552217   35794 mustload.go:65] Loading cluster: multinode-480105
	I1018 12:05:49.552309   35794 notify.go:220] Checking for updates...
	I1018 12:05:49.552773   35794 config.go:182] Loaded profile config "multinode-480105": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:05:49.552793   35794 status.go:174] checking status of multinode-480105 ...
	I1018 12:05:49.553339   35794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:05:49.553388   35794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:05:49.566859   35794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I1018 12:05:49.567362   35794 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:05:49.567860   35794 main.go:141] libmachine: Using API Version  1
	I1018 12:05:49.567883   35794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:05:49.568326   35794 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:05:49.568508   35794 main.go:141] libmachine: (multinode-480105) Calling .GetState
	I1018 12:05:49.570223   35794 status.go:371] multinode-480105 host status = "Stopped" (err=<nil>)
	I1018 12:05:49.570242   35794 status.go:384] host is not running, skipping remaining checks
	I1018 12:05:49.570248   35794 status.go:176] multinode-480105 status: &{Name:multinode-480105 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:05:49.570277   35794 status.go:174] checking status of multinode-480105-m02 ...
	I1018 12:05:49.570621   35794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1018 12:05:49.570667   35794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 12:05:49.585740   35794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1018 12:05:49.586131   35794 main.go:141] libmachine: () Calling .GetVersion
	I1018 12:05:49.586580   35794 main.go:141] libmachine: Using API Version  1
	I1018 12:05:49.586615   35794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 12:05:49.586925   35794 main.go:141] libmachine: () Calling .GetMachineName
	I1018 12:05:49.587109   35794 main.go:141] libmachine: (multinode-480105-m02) Calling .GetState
	I1018 12:05:49.589159   35794 status.go:371] multinode-480105-m02 host status = "Stopped" (err=<nil>)
	I1018 12:05:49.589174   35794 status.go:384] host is not running, skipping remaining checks
	I1018 12:05:49.589179   35794 status.go:176] multinode-480105-m02 status: &{Name:multinode-480105-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.46s)

TestMultiNode/serial/RestartMultiNode (99.65s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-480105 --wait=true -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-480105 --wait=true -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false: (1m39.078329007s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-480105 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (99.65s)

TestMultiNode/serial/ValidateNameConflict (43.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-480105
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-480105-m02 --driver=kvm2  --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-480105-m02 --driver=kvm2  --auto-update-drivers=false: exit status 14 (71.322122ms)

                                                
                                                
-- stdout --
	* [multinode-480105-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-480105-m02' is duplicated with machine name 'multinode-480105-m02' in profile 'multinode-480105'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-480105-m03 --driver=kvm2  --auto-update-drivers=false
E1018 12:07:45.860647    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-480105-m03 --driver=kvm2  --auto-update-drivers=false: (42.30961145s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-480105
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-480105: exit status 80 (233.078035ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-480105 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-480105-m03 already exists in multinode-480105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-480105-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.51s)

TestPreload (163.48s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-837081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.32.0
E1018 12:09:35.846100    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-837081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m33.379594888s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-837081 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-837081 image pull gcr.io/k8s-minikube/busybox: (2.376010241s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-837081
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-837081: (13.539421215s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-837081 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --auto-update-drivers=false
E1018 12:10:48.942481    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-837081 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --auto-update-drivers=false: (53.098952735s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-837081 image list
helpers_test.go:175: Cleaning up "test-preload-837081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-837081
--- PASS: TestPreload (163.48s)

TestScheduledStopUnix (116.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-847007 --memory=3072 --driver=kvm2  --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-847007 --memory=3072 --driver=kvm2  --auto-update-drivers=false: (44.923582824s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847007 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-847007 -n scheduled-stop-847007
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847007 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 12:11:43.125752    9909 retry.go:31] will retry after 79.542µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.126936    9909 retry.go:31] will retry after 197.24µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.128111    9909 retry.go:31] will retry after 120.705µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.129271    9909 retry.go:31] will retry after 412.428µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.130383    9909 retry.go:31] will retry after 276.665µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.131545    9909 retry.go:31] will retry after 1.016303ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.132690    9909 retry.go:31] will retry after 569.736µs: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.133852    9909 retry.go:31] will retry after 1.408386ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.136116    9909 retry.go:31] will retry after 3.301756ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.140385    9909 retry.go:31] will retry after 3.822953ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.144677    9909 retry.go:31] will retry after 3.816899ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.148964    9909 retry.go:31] will retry after 6.480973ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.156267    9909 retry.go:31] will retry after 9.316225ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.166581    9909 retry.go:31] will retry after 24.283122ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
I1018 12:11:43.191913    9909 retry.go:31] will retry after 25.963189ms: open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/scheduled-stop-847007/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847007 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847007 -n scheduled-stop-847007
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847007
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847007 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1018 12:12:45.860582    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847007
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-847007: exit status 7 (72.465808ms)

                                                
                                                
-- stdout --
	scheduled-stop-847007
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847007 -n scheduled-stop-847007
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847007 -n scheduled-stop-847007: exit status 7 (78.358752ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-847007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-847007
--- PASS: TestScheduledStopUnix (116.71s)

TestSkaffold (127s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1735892565 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-681264 --memory=3072 --driver=kvm2  --auto-update-drivers=false
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-681264 --memory=3072 --driver=kvm2  --auto-update-drivers=false: (43.202540187s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1735892565 run --minikube-profile skaffold-681264 --kube-context skaffold-681264 --status-check=true --port-forward=false --interactive=false
E1018 12:14:35.851510    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1735892565 run --minikube-profile skaffold-681264 --kube-context skaffold-681264 --status-check=true --port-forward=false --interactive=false: (1m10.749777103s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-b4745788d-6dstw" [0251bf02-5b34-4249-b2ba-53529b104a51] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004036485s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-6866c4bf96-7xqgr" [27cc1bd9-9209-4e02-8830-1e63d2cb90ef] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003482504s
helpers_test.go:175: Cleaning up "skaffold-681264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-681264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-681264: (1.0219411s)
--- PASS: TestSkaffold (127.00s)

TestRunningBinaryUpgrade (77.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2499010478 start -p running-upgrade-450845 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2499010478 start -p running-upgrade-450845 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false: (47.922991548s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-450845 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-450845 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (28.150506969s)
helpers_test.go:175: Cleaning up "running-upgrade-450845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-450845
--- PASS: TestRunningBinaryUpgrade (77.55s)

TestKubernetesUpgrade (203.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (46.109409848s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-009521
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-009521: (13.594710332s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-009521 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-009521 status --format={{.Host}}: exit status 7 (88.173734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m2.465874367s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-009521 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false: exit status 106 (117.528456ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-009521] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-009521
	    minikube start -p kubernetes-upgrade-009521 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0095212 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-009521 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-009521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m19.626733423s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-009521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-009521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-009521: (1.066694438s)
--- PASS: TestKubernetesUpgrade (203.16s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (162.75s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.165357603 start -p stopped-upgrade-407007 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.165357603 start -p stopped-upgrade-407007 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false: (1m48.429328627s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.165357603 -p stopped-upgrade-407007 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.165357603 -p stopped-upgrade-407007 stop: (3.781150177s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-407007 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-407007 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (50.539290027s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (162.75s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-407007
E1018 12:17:45.860245    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-407007: (1.425151339s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false: exit status 14 (77.864416ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-215098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (85.46s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-215098 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
E1018 12:19:35.845864    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-215098 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m25.081653979s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-215098 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.46s)

TestPause/serial/Start (107s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-709314 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --auto-update-drivers=false
E1018 12:20:09.960447    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-709314 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --auto-update-drivers=false: (1m46.998845411s)
--- PASS: TestPause/serial/Start (107.00s)

TestStartStop/group/old-k8s-version/serial/FirstStart (111.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-667489 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-667489 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m51.886380456s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (111.89s)

TestNoKubernetes/serial/StartWithStopK8s (33.81s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
E1018 12:21:11.404389    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (32.674118512s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-215098 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-215098 status -o json: exit status 2 (260.179697ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-215098","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-215098
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.81s)

TestNoKubernetes/serial/Start (24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
E1018 12:21:33.118766    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.125198    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.136618    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.158100    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.199605    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.281819    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.443411    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:33.765260    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:34.407592    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:35.689542    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:38.251114    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:43.373317    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-215098 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (23.995213574s)
--- PASS: TestNoKubernetes/serial/Start (24.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-215098 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-215098 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.769198ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
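
Note (illustrative): the VerifyK8sNotRunning step above relies on a single guest-side command, `sudo systemctl is-active --quiet service kubelet`, and treats its non-zero exit ("Process exited with status 4") as the expected state for a --no-kubernetes profile. A minimal Go sketch of the same check, assuming a `minikube` binary on PATH and the profile name from this run; the helper name and error handling are illustrative, not part of no_kubernetes_test.go:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletInactive mirrors the VerifyK8sNotRunning check: it asks systemd
// inside the guest whether the kubelet unit is active. A non-zero exit
// from systemctl means the unit is inactive or absent, which is what a
// --no-kubernetes profile should report.
func kubeletInactive(profile string) (bool, error) {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // command ran; kubelet is not active
		}
		return false, err // ssh or exec failure
	}
	return false, nil // exit 0: kubelet is active
}

func main() {
	inactive, err := kubeletInactive("NoKubernetes-215098")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet inactive:", inactive)
}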

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-215098
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-215098: (1.337480285s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-215098 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-215098 --driver=kvm2  --auto-update-drivers=false: (21.808624714s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (70.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-709314 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
E1018 12:21:53.615557    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-709314 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m10.619952679s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (70.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-215098 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-215098 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.02137ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (100.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-839073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:22:14.097643    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-839073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m40.448553159s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-667489 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8b776a4e-677e-4f50-adfb-02452eb1a2b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8b776a4e-677e-4f50-adfb-02452eb1a2b5] Running
E1018 12:22:33.326225    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004569383s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-667489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
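
Note (illustrative): the DeployApp step is a three-part sequence: create the busybox pod from testdata/busybox.yaml with kubectl, wait for it to report Running/Ready, then run `ulimit -n` inside it. A rough standalone equivalent in Go is sketched below; it assumes kubectl on PATH and the context name from this run, and it substitutes `kubectl wait` for the harness's own pod polling:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs kubectl against the given minikube context and returns the
// combined output together with any error.
func kubectl(context string, args ...string) (string, error) {
	full := append([]string{"--context", context}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	const ctx = "old-k8s-version-667489"

	// Create the busybox test pod, as the DeployApp step does.
	if out, err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}

	// Stand-in for the harness's polling: block until busybox is Ready.
	if out, err := kubectl(ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s"); err != nil {
		fmt.Println("wait failed:", err, out)
		return
	}

	// Same sanity command the test runs inside the pod.
	out, _ := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("ulimit -n:", out)
}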

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-667489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-667489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.062450504s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-667489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-667489 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-667489 --alsologtostderr -v=3: (14.42511568s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-667489 -n old-k8s-version-667489
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-667489 -n old-k8s-version-667489: exit status 7 (82.327821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-667489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-667489 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0
E1018 12:22:55.059526    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-667489 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0: (51.582117001s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-667489 -n old-k8s-version-667489
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.94s)

                                                
                                    
TestPause/serial/Pause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-709314 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-709314 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-709314 --output=json --layout=cluster: exit status 2 (286.02928ms)

                                                
                                                
-- stdout --
	{"Name":"pause-709314","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-709314","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
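
Note (illustrative): the VerifyStatus output above is machine-readable, which is how tooling can distinguish a Paused apiserver (StatusCode 418) from a Stopped kubelet (405). A small Go sketch of decoding it; the struct covers only the fields visible in this log entry, not the full `minikube status --layout=cluster` schema:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// clusterStatus mirrors only the fields visible in the VerifyStatus output
// above; the real --layout=cluster schema carries more.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		Components map[string]struct {
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed sample of the status JSON captured above.
	raw := []byte(`{"Name":"pause-709314","StatusName":"Paused",
		"Nodes":[{"Name":"pause-709314","Components":{
		"apiserver":{"StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`)

	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster %s: %s\n", st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for comp, c := range n.Components {
			fmt.Printf("  %s/%s: %d %s\n", n.Name, comp, c.StatusCode, c.StatusName)
		}
	}
}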

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-709314 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-709314 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
TestPause/serial/DeletePaused (0.89s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-709314 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (17.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.537181039s)
--- PASS: TestPause/serial/VerifyDeletedResources (17.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (91.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m31.889397392s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m12.92823971s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k96vt" [a7ed91c2-c443-4916-878d-47686dde45d6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k96vt" [a7ed91c2-c443-4916-878d-47686dde45d6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.012319539s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k96vt" [a7ed91c2-c443-4916-878d-47686dde45d6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020791702s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-667489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-839073 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bda4fcdf-f461-4f75-9d32-b2b79c6d4716] Pending
helpers_test.go:352: "busybox" [bda4fcdf-f461-4f75-9d32-b2b79c6d4716] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bda4fcdf-f461-4f75-9d32-b2b79c6d4716] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00381914s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-839073 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-667489 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-667489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-667489 -n old-k8s-version-667489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-667489 -n old-k8s-version-667489: exit status 2 (301.001647ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-667489 -n old-k8s-version-667489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-667489 -n old-k8s-version-667489: exit status 2 (307.174269ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-667489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-667489 -n old-k8s-version-667489
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-667489 -n old-k8s-version-667489
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)
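
Note (illustrative): the Pause step above always runs the same round trip: pause the profile, read {{.APIServer}} and {{.Kubelet}} via `minikube status` (which deliberately exits 2 while components are Paused or Stopped), then unpause and read them again. A condensed sketch of that sequence, assuming a `minikube` binary on PATH; the `run` helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes minikube with the given args and returns trimmed stdout.
// Non-zero exits are tolerated because `minikube status` exits 2 when a
// component is Paused or Stopped, as seen in the log above.
func run(args ...string) string {
	out, _ := exec.Command("minikube", args...).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "old-k8s-version-667489"

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver:", run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)) // expect "Paused"
	fmt.Println("kubelet:  ", run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))   // expect "Stopped"

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver:", run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile))
	fmt.Println("kubelet:  ", run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))
}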

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (64.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m4.751522072s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-839073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-839073 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-839073 --alsologtostderr -v=3
E1018 12:24:16.981766    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-839073 --alsologtostderr -v=3: (13.919979607s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-839073 -n no-preload-839073
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-839073 -n no-preload-839073: exit status 7 (85.995095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-839073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-839073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:24:35.845796    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-839073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (58.005311558s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-839073 -n no-preload-839073
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75d28ae0-a566-4ce6-a6a6-6a4c2dcf7a82] Pending
helpers_test.go:352: "busybox" [75d28ae0-a566-4ce6-a6a6-6a4c2dcf7a82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1018 12:24:49.461591    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [75d28ae0-a566-4ce6-a6a6-6a4c2dcf7a82] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005206392s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270191 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e158743b-3de4-43ef-8407-1732afb71a55] Pending
helpers_test.go:352: "busybox" [e158743b-3de4-43ef-8407-1732afb71a55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e158743b-3de4-43ef-8407-1732afb71a55] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006455591s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270191 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-948988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-948988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1336058s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.70s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-948988 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-948988 --alsologtostderr -v=3: (13.697372931s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.167736792s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-270191 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.17973969s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-270191 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-270191 --alsologtostderr -v=3: (12.302259978s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-661287 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-661287 --alsologtostderr -v=3: (13.037975721s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988: exit status 7 (79.031799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-948988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 12:25:17.167513    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (43.943523485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-270191 -n embed-certs-270191
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-270191 -n embed-certs-270191: exit status 7 (81.964346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-270191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (64.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m4.349160906s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-270191 -n embed-certs-270191
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (64.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj4dm" [bfe6c867-f3a0-4648-90bd-dee6b3acbd22] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj4dm" [bfe6c867-f3a0-4648-90bd-dee6b3acbd22] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005293572s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-661287 -n newest-cni-661287
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-661287 -n newest-cni-661287: exit status 7 (66.776305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-661287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (102.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m41.883086411s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-661287 -n newest-cni-661287
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (102.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj4dm" [bfe6c867-f3a0-4648-90bd-dee6b3acbd22] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00383065s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-839073 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-839073 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-839073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-839073 -n no-preload-839073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-839073 -n no-preload-839073: exit status 2 (270.585915ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-839073 -n no-preload-839073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-839073 -n no-preload-839073: exit status 2 (286.80442ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-839073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-839073 -n no-preload-839073
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-839073 -n no-preload-839073
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (120.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false: (2m0.019898024s)
--- PASS: TestNetworkPlugins/group/auto/Start (120.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8frzf" [725726d7-6579-4c31-ac88-a1eba642bc33] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8frzf" [725726d7-6579-4c31-ac88-a1eba642bc33] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.007920567s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8frzf" [725726d7-6579-4c31-ac88-a1eba642bc33] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010088231s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-948988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-948988 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qzb9" [f7db8344-fd03-4aab-bb8a-86f0efa6f92a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qzb9" [f7db8344-fd03-4aab-bb8a-86f0efa6f92a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004493299s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qzb9" [f7db8344-fd03-4aab-bb8a-86f0efa6f92a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005057635s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-270191 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-270191 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-270191 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-270191 -n embed-certs-270191
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-270191 -n embed-certs-270191: exit status 2 (274.580896ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-270191 -n embed-certs-270191
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-270191 -n embed-certs-270191: exit status 2 (268.748439ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-270191 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-270191 --alsologtostderr -v=1: (1.800434576s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-270191 -n embed-certs-270191
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-270191 -n embed-certs-270191
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.89s)
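For reference, the pause check above reduces to the command sequence below. This is a minimal reproduction sketch assembled from the Run lines in this block; the binary path and profile name are the ones used in this job and would differ in another environment.

# Minimal reproduction of the pause/unpause check above, using the binary and
# profile from this run. `status` exits 2 while components are paused/stopped,
# which the test tolerates ("may be ok"), hence the `|| true`.
MINIKUBE=out/minikube-linux-amd64
PROFILE=embed-certs-270191

"$MINIKUBE" pause -p "$PROFILE" --alsologtostderr -v=1
"$MINIKUBE" status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE" || true   # expects "Paused"
"$MINIKUBE" status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE" || true     # expects "Stopped"
"$MINIKUBE" unpause -p "$PROFILE" --alsologtostderr -v=1
"$MINIKUBE" status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"
"$MINIKUBE" status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE"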

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (75.36s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false: (1m15.361352327s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.36s)
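Each */Start test in this group is a single minikube invocation; the kindnet one above, reformatted onto multiple lines for readability (flags exactly as recorded in the Run line):

# Start a fresh profile with the kindnet CNI on the kvm2 driver and wait up to
# 15 minutes for all components to report healthy (flags as used in this run).
out/minikube-linux-amd64 start -p kindnet-720125 \
  --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m \
  --cni=kindnet --driver=kvm2 \
  --auto-update-drivers=false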

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.34s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --auto-update-drivers=false
E1018 12:27:00.824035    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --auto-update-drivers=false: (1m38.342220628s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-661287 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.91s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-661287 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-661287 -n newest-cni-661287
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-661287 -n newest-cni-661287: exit status 2 (284.359161ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-661287 -n newest-cni-661287
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-661287 -n newest-cni-661287: exit status 2 (302.336355ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-661287 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-661287 -n newest-cni-661287
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-661287 -n newest-cni-661287
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (96.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --auto-update-drivers=false
E1018 12:27:25.299161    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.305658    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.317188    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.339488    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.381828    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.463977    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.625793    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:25.947394    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:26.588859    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:27.870964    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:28.944511    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:30.432550    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --auto-update-drivers=false: (1m36.118802004s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.12s)
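The custom-flannel group uses the same start invocation as the other CNI groups, except that --cni points at a CNI manifest file from the test data rather than a built-in plugin name (command taken from the Run line above):

# Same start flags as the other CNI groups, but applying a custom CNI manifest
# (testdata/kube-flannel.yaml from the minikube test tree).
out/minikube-linux-amd64 start -p custom-flannel-720125 \
  --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=kvm2 \
  --auto-update-drivers=false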

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-720125 "pgrep -a kubelet"
E1018 12:27:35.553931    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1018 12:27:35.704572    9909 config.go:182] Loaded profile config "auto-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)
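The KubeletFlags step simply shells into the node and prints the running kubelet command line so its flags can be inspected; the exact command from the Run line above is:

# Show the running kubelet process (with all its flags) inside the auto-720125 node.
out/minikube-linux-amd64 ssh -p auto-720125 "pgrep -a kubelet"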

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.47s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z647r" [0f977f20-6550-43b8-91cd-342c848108e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z647r" [0f977f20-6550-43b8-91cd-342c848108e7] Running
E1018 12:27:45.796412    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:27:45.861208    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/addons-886198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.011396111s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.47s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
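The NetCatPod/DNS/Localhost/HairPin quartet runs the same connectivity checks against a netcat deployment for every CNI group; a sketch of the sequence for the auto profile, assembled from the kubectl lines above (the deployment manifest is testdata/netcat-deployment.yaml in the minikube test tree and is not reproduced here):

# Deploy the netcat test pod, then verify in-cluster DNS resolution, localhost
# reachability, and hairpin traffic back through the pod's own service.
kubectl --context auto-720125 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-720125 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"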

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2z5k4" [71ae895a-ba09-475a-b91d-d02350349dc2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006442048s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
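The ControllerPod step waits for the CNI's own pod to become Ready via the test helpers; outside the harness, a rough kubectl equivalent (an approximation, not the helper's actual code; label selector, namespace and 10m timeout taken from the log) would be:

# Wait for the kindnet pod in kube-system to report Ready, roughly mirroring
# the 10m pod wait performed by the test helper.
kubectl --context kindnet-720125 -n kube-system wait pod \
  -l app=kindnet --for=condition=Ready --timeout=10m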

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-720125 "pgrep -a kubelet"
I1018 12:28:01.223993    9909 config.go:182] Loaded profile config "kindnet-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vnv47" [2e4bdb47-1935-4493-b8ed-e33d81a221bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vnv47" [2e4bdb47-1935-4493-b8ed-e33d81a221bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.007403631s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/false/Start (92.93s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --auto-update-drivers=false: (1m32.92788201s)
--- PASS: TestNetworkPlugins/group/false/Start (92.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --auto-update-drivers=false: (1m30.8381947s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.84s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gsvl2" [0d68e861-1096-4fdd-9948-0748a038981a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gsvl2" [0d68e861-1096-4fdd-9948-0748a038981a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005979269s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-720125 "pgrep -a kubelet"
I1018 12:28:40.386895    9909 config.go:182] Loaded profile config "calico-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ln87h" [2a567305-cb86-4cc7-a529-2953c9da17d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ln87h" [2a567305-cb86-4cc7-a529-2953c9da17d2] Running
E1018 12:28:53.939170    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:53.945654    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:53.957188    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:53.978689    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:54.020187    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:54.101715    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:54.263921    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005531998s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-720125 "pgrep -a kubelet"
I1018 12:28:42.173242    9909 config.go:182] Loaded profile config "custom-flannel-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m545m" [c5de239a-457d-4839-a34c-6c983a511640] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 12:28:47.240225    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m545m" [c5de239a-457d-4839-a34c-6c983a511640] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006045836s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-720125 exec deployment/netcat -- nslookup kubernetes.default
E1018 12:28:54.586167    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1018 12:28:55.228087    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (70.08s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --auto-update-drivers=false
E1018 12:29:14.436070    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --auto-update-drivers=false: (1m10.079413026s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (115s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --auto-update-drivers=false
E1018 12:29:34.918415    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:35.845943    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/functional-897621/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --auto-update-drivers=false: (1m54.996714521s)
--- PASS: TestNetworkPlugins/group/bridge/Start (115.00s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-720125 "pgrep -a kubelet"
I1018 12:29:40.186527    9909 config.go:182] Loaded profile config "false-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.33s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-89f6z" [6df21cea-9653-44ac-a865-e9468a87ac22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 12:29:48.170653    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.177226    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.188704    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.210213    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.251725    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.333264    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:48.494789    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-89f6z" [6df21cea-9653-44ac-a865-e9468a87ac22] Running
E1018 12:29:48.816807    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:49.458521    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:49.462000    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:50.740543    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:53.302124    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.004702522s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.33s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-720125 "pgrep -a kubelet"
I1018 12:30:03.553583    9909 config.go:182] Loaded profile config "enable-default-cni-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fb8sv" [883c001c-6a53-4029-b37a-604fdaed9847] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 12:30:08.666457    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:30:09.162172    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/old-k8s-version-667489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fb8sv" [883c001c-6a53-4029-b37a-604fdaed9847] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.006754769s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (86.2s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --auto-update-drivers=false
E1018 12:30:15.880359    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/no-preload-839073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --auto-update-drivers=false: (1m26.1976449s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (86.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gfcm2" [599b790a-44f6-4a10-ac66-59b2d74da129] Running
E1018 12:30:29.148428    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005111447s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-720125 "pgrep -a kubelet"
I1018 12:30:30.243864    9909 config.go:182] Loaded profile config "flannel-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.45s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2krbh" [477c19a8-f880-443d-a757-ee6421589b94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2krbh" [477c19a8-f880-443d-a757-ee6421589b94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.151118218s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.37s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-720125 "pgrep -a kubelet"
I1018 12:31:09.988618    9909 config.go:182] Loaded profile config "bridge-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-720125 replace --force -f testdata/netcat-deployment.yaml
E1018 12:31:10.110266    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/default-k8s-diff-port-948988/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tggnw" [503ff905-b028-45b7-8662-0cb54b77052d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tggnw" [503ff905-b028-45b7-8662-0cb54b77052d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003603533s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-720125 "pgrep -a kubelet"
I1018 12:31:38.710402    9909 config.go:182] Loaded profile config "kubenet-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.22s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-720125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjmlf" [9edba5a9-5148-42c3-883f-58950b1539fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjmlf" [9edba5a9-5148-42c3-883f-58950b1539fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004110279s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-720125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-720125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

                                                
                                    

Test skip (34/345)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestFunctionalNewestKubernetes 0
188 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.17
287 TestNetworkPlugins/group/cilium 5.46
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
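
All eight TunnelCmd subtests above are skipped because the test host cannot run 'route' without a password prompt. A hedged sketch of how the tunnel could be exercised manually on a host with cached sudo credentials (the profile name is a placeholder, not one created by this run):

  # refresh sudo credentials so route-table changes do not prompt
  sudo -v
  # start a tunnel for an already-running profile, then Ctrl-C to tear it down
  out/minikube-linux-amd64 tunnel -p <profile> --alsologtostderr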

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-867265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-867265
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/cilium (5.46s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1018 12:20:30.442410    9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/skaffold-681264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-720125 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-720125" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:19:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.188:8443
  name: cert-expiration-550750
contexts:
- context:
    cluster: cert-expiration-550750
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 12:19:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-550750
  name: cert-expiration-550750
current-context: ""
kind: Config
users:
- name: cert-expiration-550750
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/cert-expiration-550750/client.crt
    client-key: /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/cert-expiration-550750/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-720125

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-720125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-720125"

                                                
                                                
----------------------- debugLogs end: cilium-720125 [took: 5.294508431s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-720125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-720125
--- SKIP: TestNetworkPlugins/group/cilium (5.46s)
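
Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the cilium group was skipped before any cluster or kubeconfig context was created, so empty output here is expected. A hedged sketch of how a cilium-backed profile could be brought up manually for ad-hoc debugging (the flag choices are assumptions, not what this suite does):

  # create a KVM-backed profile with the cilium CNI
  out/minikube-linux-amd64 start -p cilium-720125 --driver=kvm2 --cni=cilium
  # confirm the kubeconfig context now exists
  kubectl config get-contexts cilium-720125
  # clean up afterwards, as the test helper does
  out/minikube-linux-amd64 delete -p cilium-720125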

                                                
                                    