Test Report: KVM_Linux 22128

2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Failed tests (2/452)

Order   Failed test                                                Duration (s)
460     TestStartStop/group/embed-certs/serial/Pause               39.3
484     TestStartStop/group/default-k8s-diff-port/serial/Pause     41.21
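
Both failures are the Pause step of TestStartStop, where the post-pause status is expected to report "Paused". To iterate on one of them locally, the failing subtest can be re-run in isolation with the standard Go test runner. This is only a sketch: it assumes minikube's test/integration package layout and a prebuilt out/minikube-linux-amd64 binary, and your environment may need additional flags (driver selection, longer timeouts):

	# hypothetical local re-run of just the failing Pause subtest
	go test -v ./test/integration -run 'TestStartStop/group/embed-certs/serial/Pause' -timeout 60m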
TestStartStop/group/embed-certs/serial/Pause (39.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1: (1.640302545s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
E1213 09:34:18.852172   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077: exit status 2 (15.799193726s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077: exit status 2 (15.881375408s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-594077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25: (1.659566589s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-594077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                             │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2                                                                                               │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ image   │ no-preload-616969 image list --format=json                                                                                                                                                                                │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ pause   │ -p no-preload-616969 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ unpause │ -p no-preload-616969 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ delete  │ -p no-preload-616969                                                                                                                                                                                                      │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ delete  │ -p no-preload-616969                                                                                                                                                                                                      │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2                                                                                                                               │ auto-949855                  │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ stop    │ -p newest-cni-719997 --alsologtostderr -v=3                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-719997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0 │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:34 UTC │
	│ image   │ embed-certs-594077 image list --format=json                                                                                                                                                                               │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ pause   │ -p embed-certs-594077 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                        │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ stop    │ -p default-k8s-diff-port-018953 --alsologtostderr -v=3                                                                                                                                                                    │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ image   │ newest-cni-719997 image list --format=json                                                                                                                                                                                │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ pause   │ -p newest-cni-719997 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ unpause │ -p newest-cni-719997 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ delete  │ -p newest-cni-719997                                                                                                                                                                                                      │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ delete  │ -p newest-cni-719997                                                                                                                                                                                                      │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ start   │ -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2                                                                                                              │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-018953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                   │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ start   │ -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │                     │
	│ unpause │ -p embed-certs-594077 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:34:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:34:44.133654   50144 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:34:44.133909   50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:44.133917   50144 out.go:374] Setting ErrFile to fd 2...
	I1213 09:34:44.133921   50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:44.134131   50144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:34:44.134591   50144 out.go:368] Setting JSON to false
	I1213 09:34:44.135680   50144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4634,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:34:44.135763   50144 start.go:143] virtualization: kvm guest
	I1213 09:34:44.137725   50144 out.go:179] * [default-k8s-diff-port-018953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:34:44.139291   50144 notify.go:221] Checking for updates...
	I1213 09:34:44.139324   50144 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:34:44.141030   50144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:34:44.142532   50144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:34:44.145292   50144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:34:44.146816   50144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:34:44.148267   50144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:34:44.150282   50144 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:34:44.150781   50144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:34:44.194033   50144 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:34:44.195572   50144 start.go:309] selected driver: kvm2
	I1213 09:34:44.195598   50144 start.go:927] validating driver "kvm2" against &{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Li
stenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:34:44.195711   50144 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:34:44.196775   50144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:34:44.196810   50144 cni.go:84] Creating CNI manager for ""
	I1213 09:34:44.196896   50144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:34:44.196958   50144 start.go:353] cluster config:
	{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:34:44.197055   50144 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:34:44.198938   50144 out.go:179] * Starting "default-k8s-diff-port-018953" primary control-plane node in "default-k8s-diff-port-018953" cluster
	I1213 09:34:42.777697   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:42.778596   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:42.778617   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:42.779063   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:42.779098   49982 retry.go:31] will retry after 1.16996515s: waiting for domain to come up
	I1213 09:34:43.950913   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:43.951731   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:43.951754   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:43.952220   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:43.952273   49982 retry.go:31] will retry after 990.024449ms: waiting for domain to come up
	I1213 09:34:44.943737   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:44.944673   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:44.944698   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:44.945220   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:44.945259   49982 retry.go:31] will retry after 1.213110356s: waiting for domain to come up
	I1213 09:34:46.159702   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:46.160662   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:46.160685   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:46.161142   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:46.161190   49982 retry.go:31] will retry after 2.219294638s: waiting for domain to come up
	W1213 09:34:45.255022   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	W1213 09:34:47.754969   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	I1213 09:34:44.200532   50144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 09:34:44.200573   50144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 09:34:44.200590   50144 cache.go:65] Caching tarball of preloaded images
	I1213 09:34:44.200687   50144 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:34:44.200700   50144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 09:34:44.200800   50144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/config.json ...
	I1213 09:34:44.201085   50144 start.go:360] acquireMachinesLock for default-k8s-diff-port-018953: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> Docker <==
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656562567Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656735001Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:33:54 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:33:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.873702256Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:34:03 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.467828020Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.540651785Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.542156265Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:34:09 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567348185Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567521379Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575597440Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575676593Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.140741400Z" level=error msg="Handler for POST /v1.51/containers/de05857e10ed/pause returned error: cannot pause container de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5: OCI runtime pause failed: container not running"
	Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.193091447Z" level=info msg="ignoring event" container=de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 09:34:50 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-pg6d8_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"88f1a58b376611f492c5b508834009cd114167f31ab62ec3d85fc7744f5c10b4\""
	Dec 13 09:34:51 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.000059166Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107124399Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107248417Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:34:52 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:52Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140813005Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140880277Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152224216Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152379674Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2f643a44c0947       6e38f40d628db                                                                                         1 second ago         Running             storage-provisioner       2                   f0fe97ebd2fa8       storage-provisioner                          kube-system
	9d744ae1656d5       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        49 seconds ago       Running             kubernetes-dashboard      0                   655aad46b16e4       kubernetes-dashboard-855c9754f9-5ckvx        kubernetes-dashboard
	e383d4e28bee5       56cc512116c8f                                                                                         58 seconds ago       Running             busybox                   1                   88fedb324336f       busybox                                      default
	3500352ae1887       52546a367cc9e                                                                                         58 seconds ago       Running             coredns                   1                   02321cceca25c       coredns-66bc5c9577-sbl6b                     kube-system
	de05857e10ed1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   f0fe97ebd2fa8       storage-provisioner                          kube-system
	3b9abac9a0e5e       8aa150647e88a                                                                                         About a minute ago   Running             kube-proxy                1                   0185479b8f1ac       kube-proxy-gbh4v                             kube-system
	652f8878d5fe5       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   06277dacc9521       etcd-embed-certs-594077                      kube-system
	8cac4fb329021       88320b5498ff2                                                                                         About a minute ago   Running             kube-scheduler            1                   a72c06cffcc53       kube-scheduler-embed-certs-594077            kube-system
	bcf2fd0416777       01e8bacf0f500                                                                                         About a minute ago   Running             kube-controller-manager   1                   064f32bea94a2       kube-controller-manager-embed-certs-594077   kube-system
	ea6f4d67228a1       a5f569d49a979                                                                                         About a minute ago   Running             kube-apiserver            1                   d5b4d42f70f7a       kube-apiserver-embed-certs-594077            kube-system
	ceb2c2191e490       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   1082bf842642a       busybox                                      default
	a2c91c9fb48e6       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   299749fc58f7b       coredns-66bc5c9577-sbl6b                     kube-system
	08fadc68f466b       8aa150647e88a                                                                                         2 minutes ago        Exited              kube-proxy                0                   acc0a3cff3053       kube-proxy-gbh4v                             kube-system
	6e6c8e89a43c7       a5f569d49a979                                                                                         3 minutes ago        Exited              kube-apiserver            0                   3f64649de4057       kube-apiserver-embed-certs-594077            kube-system
	d6604faaddf3f       a3e246e9556e9                                                                                         3 minutes ago        Exited              etcd                      0                   45afc8f5a4c50       etcd-embed-certs-594077                      kube-system
	4b2a5a8f531e3       01e8bacf0f500                                                                                         3 minutes ago        Exited              kube-controller-manager   0                   b2be7e1ac613b       kube-controller-manager-embed-certs-594077   kube-system
	cf9e0b0dcbf9b       88320b5498ff2                                                                                         3 minutes ago        Exited              kube-scheduler            0                   356edcdb1aadc       kube-scheduler-embed-certs-594077            kube-system
	
	
	==> coredns [3500352ae188] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52332 - 31954 "HINFO IN 7552130428793522761.6479760196523847134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112161133s
	
	
	==> coredns [a2c91c9fb48e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36208 - 41315 "HINFO IN 1358106524289017339.4675404298798629450. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043234961s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               embed-certs-594077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-594077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=embed-certs-594077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-594077
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:34:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:33:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    embed-certs-594077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f9ed15ee5214a3682f9a8b37f59f7e2
	  System UUID:                3f9ed15e-e521-4a36-82f9-a8b37f59f7e2
	  Boot ID:                    5905dae6-5187-479b-bc88-9a3ad2e0e23b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 coredns-66bc5c9577-sbl6b                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m58s
	  kube-system                 etcd-embed-certs-594077                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m4s
	  kube-system                 kube-apiserver-embed-certs-594077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-controller-manager-embed-certs-594077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-proxy-gbh4v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-scheduler-embed-certs-594077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 metrics-server-746fcd58dc-r9qzb               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-42zcv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ckvx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m55s                  kube-proxy       
	  Normal   Starting                 65s                    kube-proxy       
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 3m4s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m4s                   kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m4s                   kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m4s                   kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeReady                3m                     kubelet          Node embed-certs-594077 status is now: NodeReady
	  Normal   RegisteredNode           2m59s                  node-controller  Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
	  Normal   Starting                 74s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)      kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)      kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)      kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  74s                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 68s                    kubelet          Node embed-certs-594077 has been rebooted, boot id: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
	  Normal   RegisteredNode           62s                    node-controller  Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
	  Normal   Starting                 2s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  1s                     kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    1s                     kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     1s                     kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Dec13 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004177] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.886210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.119341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.139009] kauditd_printk_skb: 421 callbacks suppressed
	[  +8.056256] kauditd_printk_skb: 193 callbacks suppressed
	[  +2.474062] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.838025] kauditd_printk_skb: 259 callbacks suppressed
	[Dec13 09:34] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.277121] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.213014] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [652f8878d5fe] <==
	{"level":"warn","ts":"2025-12-13T09:33:42.931665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:42.982941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.000194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.029221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.044582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.070524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.114197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.160976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.185685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.223885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.236727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.245766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.265875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.278357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.291236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.309255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.331705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.390604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.413099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.484942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.540577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.562762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.586381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.605698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.726718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	
	
	==> etcd [d6604faaddf3] <==
	{"level":"warn","ts":"2025-12-13T09:31:43.869789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.901134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.939127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.963838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.987753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:44.019434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:44.214190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:32:49.290228Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:32:49.290316Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	{"level":"error","ts":"2025-12-13T09:32:49.290413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:32:56.297824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:32:56.297931Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.297953Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
	{"level":"info","ts":"2025-12-13T09:32:56.298053Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:32:56.298064Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:32:56.298487Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:32:56.298533Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:32:56.298541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T09:32:56.300679Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:32:56.300980Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:32:56.301169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.471124Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"error","ts":"2025-12-13T09:32:56.471216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.471279Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2025-12-13T09:32:56.471291Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> kernel <==
	 09:34:52 up 1 min,  0 users,  load average: 1.24, 0.63, 0.24
	Linux embed-certs-594077 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6e6c8e89a43c] <==
	W1213 09:32:58.579780       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.586497       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.612833       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.628739       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.634886       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.673263       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.687212       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.757751       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.783556       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.793183       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.801878       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.893696       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.903550       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.951767       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.041688       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.050000       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.059710       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.112782       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.136210       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.177829       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.229503       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.241149       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.252494       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.257821       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.277534       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ea6f4d67228a] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:33:45.920199       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 09:33:48.019787       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 09:33:48.110189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
	I1213 09:33:48.112210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:33:48.682383       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:33:48.768099       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:33:48.837638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:33:48.853781       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:33:50.490654       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:33:50.491397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:33:51.267285       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:33:52.310927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.138.136"}
	I1213 09:33:52.369329       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.120.51"}
	W1213 09:34:49.882399       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:34:49.882606       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:34:49.882638       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:34:49.888307       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:34:49.888364       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:34:49.888377       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4b2a5a8f531e] <==
	I1213 09:31:53.336039       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:31:53.336048       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:31:53.336056       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:31:53.343453       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:31:53.353435       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:31:53.355442       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:31:53.355525       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 09:31:53.357255       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:31:53.358433       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-594077" podCIDRs=["10.244.0.0/24"]
	I1213 09:31:53.360081       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:31:53.360496       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:31:53.358818       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:31:53.361073       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 09:31:53.361384       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:31:53.358828       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 09:31:53.361985       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:31:53.362214       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:31:53.363737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:31:53.363994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:31:53.364667       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:31:53.370811       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:31:53.370831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:31:53.370965       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:31:53.370971       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:31:53.387013       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-controller-manager [bcf2fd041677] <==
	I1213 09:33:50.374959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 09:33:50.374993       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 09:33:50.375011       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:33:50.375874       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:33:50.403422       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:33:50.396647       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 09:33:50.396659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:33:50.420341       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:33:50.435314       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:33:50.442414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:50.443866       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:33:50.450389       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:50.450482       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:33:50.450491       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1213 09:33:51.616002       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.697375       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.734843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.784632       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.793970       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.832688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.832688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.851357       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.871143       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:34:50.051560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 09:34:50.064726       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [08fadc68f466] <==
	I1213 09:31:56.508637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:31:56.609891       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:31:56.609953       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
	E1213 09:31:56.610205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:31:56.804733       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:31:56.804847       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:31:56.804896       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:31:56.865819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:31:56.878175       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:31:56.879530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:31:56.898538       1 config.go:200] "Starting service config controller"
	I1213 09:31:56.898916       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:31:56.899076       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:31:56.899279       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:31:56.899942       1 config.go:309] "Starting node config controller"
	I1213 09:31:56.900469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:31:56.900662       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:31:56.906769       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:31:56.908443       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:31:57.000158       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:31:57.001827       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:31:57.009299       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3b9abac9a0e5] <==
	I1213 09:33:47.146798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:33:47.248054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:33:47.248124       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
	E1213 09:33:47.248660       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:33:47.305149       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:33:47.305245       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:33:47.305313       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:33:47.321084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:33:47.321935       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:33:47.321978       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:47.329818       1 config.go:309] "Starting node config controller"
	I1213 09:33:47.329864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:33:47.329872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:33:47.330417       1 config.go:200] "Starting service config controller"
	I1213 09:33:47.330506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:33:47.330530       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:33:47.330533       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:33:47.330543       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:33:47.330546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:33:47.431478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:33:47.431478       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:33:47.431538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8cac4fb32902] <==
	I1213 09:33:44.776239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:44.788584       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:33:44.789576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:33:44.793480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:33:44.793495       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 09:33:44.834891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:33:44.835669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:33:44.836791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:33:44.836887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:33:44.836961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:33:44.837040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:33:44.837096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:33:44.837155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:33:44.837891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:33:44.838140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:33:44.838370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:33:44.838427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:33:44.838516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:33:44.838546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:33:44.838598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:33:44.840424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:33:44.840886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:33:44.842576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:33:44.842610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1213 09:33:46.493666       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [cf9e0b0dcbf9] <==
	E1213 09:31:45.673059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:31:45.672922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:31:45.674367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:31:45.674656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:31:45.675121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:31:45.675359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:31:46.593861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:31:46.620372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:31:46.637400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:31:46.671885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:31:46.675254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:31:46.679901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:31:46.741636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:31:46.785658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:31:46.807123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:31:46.817797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:31:46.982287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:31:47.004930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:31:47.032549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1213 09:31:50.148815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:32:49.158251       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:32:49.158342       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:32:49.158389       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:32:49.158466       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:32:49.158499       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476932    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-ca-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476983    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-flexvolume-dir\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477010    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2791db0168e112ad1ebb49d47ad7acc4-kubeconfig\") pod \"kube-scheduler-embed-certs-594077\" (UID: \"2791db0168e112ad1ebb49d47ad7acc4\") " pod="kube-system/kube-scheduler-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477028    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-certs\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477043    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-k8s-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477057    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-usr-share-ca-certificates\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477075    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-k8s-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477090    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-kubeconfig\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477105    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477118    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-data\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477149    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-ca-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.552903    4246 apiserver.go:52] "Watching apiserver"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.598581    4246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.678949    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-lib-modules\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679096    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-xtables-lock\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679116    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1db9cb1e-bc7a-4d9f-9042-936fcad750f7-tmp\") pod \"storage-provisioner\" (UID: \"1db9cb1e-bc7a-4d9f-9042-936fcad750f7\") " pod="kube-system/storage-provisioner"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.864587    4246 scope.go:117] "RemoveContainer" containerID="de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.113944    4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114004    4246 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114210    4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-42zcv_kubernetes-dashboard(7123ef17-ca61-4aa4-a10e-b29ec51a6667): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114247    4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-42zcv" podUID="7123ef17-ca61-4aa4-a10e-b29ec51a6667"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.153914    4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154760    4246 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154958    4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9qzb_kube-system(25f7da03-5692-48c6-8b6e-22b84e1aec43): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.155038    4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9qzb" podUID="25f7da03-5692-48c6-8b6e-22b84e1aec43"
	
	
	==> kubernetes-dashboard [9d744ae1656d] <==
	2025/12/13 09:34:04 Starting overwatch
	2025/12/13 09:34:04 Using namespace: kubernetes-dashboard
	2025/12/13 09:34:04 Using in-cluster config to connect to apiserver
	2025/12/13 09:34:04 Using secret token for csrf signing
	2025/12/13 09:34:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:34:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:34:04 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:34:04 Generating JWE encryption key
	2025/12/13 09:34:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:34:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:34:04 Initializing JWE encryption key from synchronized object
	2025/12/13 09:34:04 Creating in-cluster Sidecar client
	2025/12/13 09:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:34:04 Serving insecurely on HTTP port: 9090
	2025/12/13 09:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f643a44c094] <==
	I1213 09:34:52.258732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:34:52.299738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:34:52.300284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:34:52.305497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [de05857e10ed] <==
	I1213 09:33:46.911515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:34:16.919520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-594077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1 (66.721003ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-r9qzb" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-42zcv" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25: (1.308832085s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-594077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                             │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2                                                                                               │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ image   │ no-preload-616969 image list --format=json                                                                                                                                                                                │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ pause   │ -p no-preload-616969 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ unpause │ -p no-preload-616969 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ delete  │ -p no-preload-616969                                                                                                                                                                                                      │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ delete  │ -p no-preload-616969                                                                                                                                                                                                      │ no-preload-616969            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2                                                                                                                               │ auto-949855                  │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ stop    │ -p newest-cni-719997 --alsologtostderr -v=3                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-719997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
	│ start   │ -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0 │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:34 UTC │
	│ image   │ embed-certs-594077 image list --format=json                                                                                                                                                                               │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ pause   │ -p embed-certs-594077 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                        │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ stop    │ -p default-k8s-diff-port-018953 --alsologtostderr -v=3                                                                                                                                                                    │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ image   │ newest-cni-719997 image list --format=json                                                                                                                                                                                │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ pause   │ -p newest-cni-719997 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ unpause │ -p newest-cni-719997 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ delete  │ -p newest-cni-719997                                                                                                                                                                                                      │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ delete  │ -p newest-cni-719997                                                                                                                                                                                                      │ newest-cni-719997            │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ start   │ -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2                                                                                                              │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-018953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                   │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	│ start   │ -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │                     │
	│ unpause │ -p embed-certs-594077 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-594077           │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:34:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:34:44.133654   50144 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:34:44.133909   50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:44.133917   50144 out.go:374] Setting ErrFile to fd 2...
	I1213 09:34:44.133921   50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:44.134131   50144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:34:44.134591   50144 out.go:368] Setting JSON to false
	I1213 09:34:44.135680   50144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4634,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:34:44.135763   50144 start.go:143] virtualization: kvm guest
	I1213 09:34:44.137725   50144 out.go:179] * [default-k8s-diff-port-018953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:34:44.139291   50144 notify.go:221] Checking for updates...
	I1213 09:34:44.139324   50144 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:34:44.141030   50144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:34:44.142532   50144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:34:44.145292   50144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:34:44.146816   50144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:34:44.148267   50144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:34:44.150282   50144 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:34:44.150781   50144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:34:44.194033   50144 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:34:44.195572   50144 start.go:309] selected driver: kvm2
	I1213 09:34:44.195598   50144 start.go:927] validating driver "kvm2" against &{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:34:44.195711   50144 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:34:44.196775   50144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:34:44.196810   50144 cni.go:84] Creating CNI manager for ""
	I1213 09:34:44.196896   50144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:34:44.196958   50144 start.go:353] cluster config:
	{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:34:44.197055   50144 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:34:44.198938   50144 out.go:179] * Starting "default-k8s-diff-port-018953" primary control-plane node in "default-k8s-diff-port-018953" cluster
	I1213 09:34:42.777697   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:42.778596   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:42.778617   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:42.779063   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:42.779098   49982 retry.go:31] will retry after 1.16996515s: waiting for domain to come up
	I1213 09:34:43.950913   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:43.951731   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:43.951754   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:43.952220   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:43.952273   49982 retry.go:31] will retry after 990.024449ms: waiting for domain to come up
	I1213 09:34:44.943737   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:44.944673   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:44.944698   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:44.945220   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:44.945259   49982 retry.go:31] will retry after 1.213110356s: waiting for domain to come up
	I1213 09:34:46.159702   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:46.160662   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:46.160685   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:46.161142   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:46.161190   49982 retry.go:31] will retry after 2.219294638s: waiting for domain to come up
	W1213 09:34:45.255022   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	W1213 09:34:47.754969   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	I1213 09:34:44.200532   50144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 09:34:44.200573   50144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 09:34:44.200590   50144 cache.go:65] Caching tarball of preloaded images
	I1213 09:34:44.200687   50144 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:34:44.200700   50144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 09:34:44.200800   50144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/config.json ...
	I1213 09:34:44.201085   50144 start.go:360] acquireMachinesLock for default-k8s-diff-port-018953: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:34:48.382833   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:48.383965   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:48.383989   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:48.384580   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:48.384618   49982 retry.go:31] will retry after 2.900119926s: waiting for domain to come up
	I1213 09:34:51.288687   49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
	I1213 09:34:51.290269   49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
	I1213 09:34:51.290294   49982 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:34:51.290800   49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
	I1213 09:34:51.290844   49982 retry.go:31] will retry after 2.549669485s: waiting for domain to come up
	W1213 09:34:50.253513   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	W1213 09:34:52.255803   48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656562567Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656735001Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:33:54 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:33:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.873702256Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:34:03 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.467828020Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.540651785Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.542156265Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:34:09 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567348185Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567521379Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575597440Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575676593Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.140741400Z" level=error msg="Handler for POST /v1.51/containers/de05857e10ed/pause returned error: cannot pause container de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5: OCI runtime pause failed: container not running"
	Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.193091447Z" level=info msg="ignoring event" container=de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 09:34:50 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-pg6d8_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"88f1a58b376611f492c5b508834009cd114167f31ab62ec3d85fc7744f5c10b4\""
	Dec 13 09:34:51 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.000059166Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107124399Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107248417Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:34:52 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:52Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140813005Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140880277Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152224216Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152379674Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2f643a44c0947       6e38f40d628db                                                                                         3 seconds ago        Running             storage-provisioner       2                   f0fe97ebd2fa8       storage-provisioner                          kube-system
	9d744ae1656d5       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        51 seconds ago       Running             kubernetes-dashboard      0                   655aad46b16e4       kubernetes-dashboard-855c9754f9-5ckvx        kubernetes-dashboard
	e383d4e28bee5       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   88fedb324336f       busybox                                      default
	3500352ae1887       52546a367cc9e                                                                                         About a minute ago   Running             coredns                   1                   02321cceca25c       coredns-66bc5c9577-sbl6b                     kube-system
	de05857e10ed1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   f0fe97ebd2fa8       storage-provisioner                          kube-system
	3b9abac9a0e5e       8aa150647e88a                                                                                         About a minute ago   Running             kube-proxy                1                   0185479b8f1ac       kube-proxy-gbh4v                             kube-system
	652f8878d5fe5       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   06277dacc9521       etcd-embed-certs-594077                      kube-system
	8cac4fb329021       88320b5498ff2                                                                                         About a minute ago   Running             kube-scheduler            1                   a72c06cffcc53       kube-scheduler-embed-certs-594077            kube-system
	bcf2fd0416777       01e8bacf0f500                                                                                         About a minute ago   Running             kube-controller-manager   1                   064f32bea94a2       kube-controller-manager-embed-certs-594077   kube-system
	ea6f4d67228a1       a5f569d49a979                                                                                         About a minute ago   Running             kube-apiserver            1                   d5b4d42f70f7a       kube-apiserver-embed-certs-594077            kube-system
	ceb2c2191e490       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   1082bf842642a       busybox                                      default
	a2c91c9fb48e6       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   299749fc58f7b       coredns-66bc5c9577-sbl6b                     kube-system
	08fadc68f466b       8aa150647e88a                                                                                         2 minutes ago        Exited              kube-proxy                0                   acc0a3cff3053       kube-proxy-gbh4v                             kube-system
	6e6c8e89a43c7       a5f569d49a979                                                                                         3 minutes ago        Exited              kube-apiserver            0                   3f64649de4057       kube-apiserver-embed-certs-594077            kube-system
	d6604faaddf3f       a3e246e9556e9                                                                                         3 minutes ago        Exited              etcd                      0                   45afc8f5a4c50       etcd-embed-certs-594077                      kube-system
	4b2a5a8f531e3       01e8bacf0f500                                                                                         3 minutes ago        Exited              kube-controller-manager   0                   b2be7e1ac613b       kube-controller-manager-embed-certs-594077   kube-system
	cf9e0b0dcbf9b       88320b5498ff2                                                                                         3 minutes ago        Exited              kube-scheduler            0                   356edcdb1aadc       kube-scheduler-embed-certs-594077            kube-system
	
	
	==> coredns [3500352ae188] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52332 - 31954 "HINFO IN 7552130428793522761.6479760196523847134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112161133s
	
	
	==> coredns [a2c91c9fb48e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36208 - 41315 "HINFO IN 1358106524289017339.4675404298798629450. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043234961s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               embed-certs-594077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-594077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=embed-certs-594077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-594077
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:34:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:31:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:34:51 +0000   Sat, 13 Dec 2025 09:33:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    embed-certs-594077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f9ed15ee5214a3682f9a8b37f59f7e2
	  System UUID:                3f9ed15e-e521-4a36-82f9-a8b37f59f7e2
	  Boot ID:                    5905dae6-5187-479b-bc88-9a3ad2e0e23b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-66bc5c9577-sbl6b                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m
	  kube-system                 etcd-embed-certs-594077                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m6s
	  kube-system                 kube-apiserver-embed-certs-594077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-controller-manager-embed-certs-594077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-proxy-gbh4v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-scheduler-embed-certs-594077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 metrics-server-746fcd58dc-r9qzb               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-42zcv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ckvx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m57s                  kube-proxy       
	  Normal   Starting                 67s                    kube-proxy       
	  Normal   Starting                 3m14s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m14s)  kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m14s)  kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m14s (x7 over 3m14s)  kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 3m6s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m6s                   kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s                   kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m6s                   kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeReady                3m2s                   kubelet          Node embed-certs-594077 status is now: NodeReady
	  Normal   RegisteredNode           3m1s                   node-controller  Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
	  Normal   Starting                 76s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  76s (x8 over 76s)      kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x8 over 76s)      kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x7 over 76s)      kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 70s                    kubelet          Node embed-certs-594077 has been rebooted, boot id: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
	  Normal   RegisteredNode           64s                    node-controller  Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
	  Normal   Starting                 4s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3s                     kubelet          Node embed-certs-594077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3s                     kubelet          Node embed-certs-594077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3s                     kubelet          Node embed-certs-594077 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Dec13 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004177] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.886210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.119341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.139009] kauditd_printk_skb: 421 callbacks suppressed
	[  +8.056256] kauditd_printk_skb: 193 callbacks suppressed
	[  +2.474062] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.838025] kauditd_printk_skb: 259 callbacks suppressed
	[Dec13 09:34] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.277121] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.213014] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [652f8878d5fe] <==
	{"level":"warn","ts":"2025-12-13T09:33:42.931665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:42.982941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.000194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.029221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.044582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.070524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.114197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.160976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.185685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.223885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.236727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.245766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.265875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.278357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.291236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.309255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.331705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.390604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.413099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.484942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.540577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.562762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.586381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.605698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:33:43.726718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	
	
	==> etcd [d6604faaddf3] <==
	{"level":"warn","ts":"2025-12-13T09:31:43.869789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.901134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.939127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.963838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:43.987753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:44.019434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:31:44.214190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:32:49.290228Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:32:49.290316Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	{"level":"error","ts":"2025-12-13T09:32:49.290413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:32:56.297824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:32:56.297931Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.297953Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
	{"level":"info","ts":"2025-12-13T09:32:56.298053Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:32:56.298064Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:32:56.298487Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:32:56.298533Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:32:56.298541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T09:32:56.300679Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:32:56.300980Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:32:56.301169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.471124Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"error","ts":"2025-12-13T09:32:56.471216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:32:56.471279Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2025-12-13T09:32:56.471291Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> kernel <==
	 09:34:54 up 1 min,  0 users,  load average: 1.24, 0.63, 0.24
	Linux embed-certs-594077 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6e6c8e89a43c] <==
	W1213 09:32:58.579780       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.586497       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.612833       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.628739       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.634886       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.673263       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.687212       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.757751       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.783556       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.793183       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.801878       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.893696       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.903550       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:58.951767       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.041688       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.050000       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.059710       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.112782       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.136210       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.177829       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.229503       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.241149       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.252494       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.257821       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:32:59.277534       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
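
	These "connection refused" dial errors against 127.0.0.1:2379 fall at 09:32:58-09:32:59, immediately after the [d6604faaddf3] etcd instance above logged "closed etcd server" at 09:32:56, so they record the old apiserver's etcd client retrying a backend that had just been stopped rather than a new failure. If the ordering needs confirming, the container lifecycles can be compared inside the VM; a sketch, assuming the Docker runtime that the bracketed container IDs in these headings suggest:
	
	  # hypothetical timeline check inside the minikube VM (assumes the docker runtime)
	  minikube ssh -p embed-certs-594077 "sudo docker ps -a --format '{{.ID}}\t{{.Names}}\t{{.Status}}' | grep -E 'etcd|kube-apiserver'"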
	
	
	==> kube-apiserver [ea6f4d67228a] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:33:45.920199       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 09:33:48.019787       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 09:33:48.110189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
	I1213 09:33:48.112210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:33:48.682383       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:33:48.768099       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:33:48.837638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:33:48.853781       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:33:50.490654       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:33:50.491397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:33:51.267285       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:33:52.310927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.138.136"}
	I1213 09:33:52.369329       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.120.51"}
	W1213 09:34:49.882399       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:34:49.882606       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:34:49.882638       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:34:49.888307       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:34:49.888364       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:34:49.888377       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
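
	The repeated 503s for v1beta1.metrics.k8s.io mean the aggregation layer cannot fetch an OpenAPI spec from the metrics-server APIService, which lines up with the metrics-server pod stuck in ErrImagePull in the kubelet log below; the apiserver simply requeues the item, so these entries reflect a missing backend rather than an apiserver defect. The APIService condition makes that explicit; a sketch with plain kubectl against this profile, assuming the upstream k8s-app=metrics-server label:
	
	  # expect Available=False while the backing pod cannot start
	  kubectl get apiservice v1beta1.metrics.k8s.io -o wide
	  kubectl -n kube-system get pods -l k8s-app=metrics-server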
	
	
	==> kube-controller-manager [4b2a5a8f531e] <==
	I1213 09:31:53.336039       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:31:53.336048       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:31:53.336056       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:31:53.343453       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:31:53.353435       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:31:53.355442       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:31:53.355525       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 09:31:53.357255       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:31:53.358433       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-594077" podCIDRs=["10.244.0.0/24"]
	I1213 09:31:53.360081       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:31:53.360496       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:31:53.358818       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:31:53.361073       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 09:31:53.361384       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:31:53.358828       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 09:31:53.361985       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:31:53.362214       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:31:53.363737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:31:53.363994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:31:53.364667       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:31:53.370811       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:31:53.370831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:31:53.370965       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:31:53.370971       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:31:53.387013       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-controller-manager [bcf2fd041677] <==
	I1213 09:33:50.374959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 09:33:50.374993       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 09:33:50.375011       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:33:50.375874       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:33:50.403422       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:33:50.396647       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 09:33:50.396659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:33:50.420341       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:33:50.435314       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:33:50.442414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:50.443866       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:33:50.450389       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:50.450482       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:33:50.450491       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1213 09:33:51.616002       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.697375       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.734843       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.784632       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.793970       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.832688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.832688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.851357       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:33:51.871143       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:34:50.051560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 09:34:50.064726       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
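
	The cluster of serviceaccount "kubernetes-dashboard" not found sync errors at 09:33:51 looks like the typical ordering race while the dashboard addon's manifests are being applied: the ReplicaSets exist a moment before their ServiceAccount, the controller retries, and the apiserver log above shows the dashboard Services receiving cluster IPs at 09:33:52, after which no further errors appear. Confirming that the namespace settled is a one-liner; a sketch:
	
	  # verify the dashboard ServiceAccount and workloads exist once the addon finished applying
	  kubectl -n kubernetes-dashboard get serviceaccounts,deployments,replicasets,pods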
	
	
	==> kube-proxy [08fadc68f466] <==
	I1213 09:31:56.508637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:31:56.609891       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:31:56.609953       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
	E1213 09:31:56.610205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:31:56.804733       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:31:56.804847       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:31:56.804896       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:31:56.865819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:31:56.878175       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:31:56.879530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:31:56.898538       1 config.go:200] "Starting service config controller"
	I1213 09:31:56.898916       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:31:56.899076       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:31:56.899279       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:31:56.899942       1 config.go:309] "Starting node config controller"
	I1213 09:31:56.900469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:31:56.900662       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:31:56.906769       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:31:56.908443       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:31:57.000158       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:31:57.001827       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:31:57.009299       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3b9abac9a0e5] <==
	I1213 09:33:47.146798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:33:47.248054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:33:47.248124       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
	E1213 09:33:47.248660       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:33:47.305149       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:33:47.305245       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:33:47.305313       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:33:47.321084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:33:47.321935       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:33:47.321978       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:47.329818       1 config.go:309] "Starting node config controller"
	I1213 09:33:47.329864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:33:47.329872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:33:47.330417       1 config.go:200] "Starting service config controller"
	I1213 09:33:47.330506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:33:47.330530       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:33:47.330533       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:33:47.330543       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:33:47.330546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:33:47.431478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:33:47.431478       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:33:47.431538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
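
	Both kube-proxy instances report "No iptables support for family IPv6" because the guest kernel exposes no ip6tables nat table, and each then continues in single-stack IPv4 mode as the following lines show, so for an IPv4-only minikube VM this is informational rather than an error to chase. Whether the table is available at all can be checked from the host; a sketch, assuming shell access to the profile via minikube ssh:
	
	  # does the guest kernel expose an IPv6 nat table at all?
	  minikube ssh -p embed-certs-594077 "sudo ip6tables -t nat -L -n >/dev/null 2>&1 && echo ipv6 nat: present || echo ipv6 nat: missing"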
	
	
	==> kube-scheduler [8cac4fb32902] <==
	I1213 09:33:44.776239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:44.788584       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:33:44.789576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:33:44.793480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:33:44.793495       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 09:33:44.834891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:33:44.835669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:33:44.836791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:33:44.836887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:33:44.836961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:33:44.837040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:33:44.837096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:33:44.837155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:33:44.837891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:33:44.838140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:33:44.838370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:33:44.838427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:33:44.838516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:33:44.838546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:33:44.838598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:33:44.840424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:33:44.840886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:33:44.842576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:33:44.842610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1213 09:33:46.493666       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
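
	The burst of "Failed to watch ... is forbidden" errors at 09:33:44 is typical of a scheduler coming up against an apiserver that is still finishing its own startup, before RBAC-backed authorization is fully serving; the final line shows the scheduler's informer caches syncing at 09:33:46, so the condition cleared within about two seconds. If such messages persisted, the effective permissions could be checked directly; a sketch using kubectl impersonation:
	
	  # verify the scheduler identity can list one of the resources it was denied above
	  kubectl auth can-i list nodes --as=system:kube-scheduler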
	
	
	==> kube-scheduler [cf9e0b0dcbf9] <==
	E1213 09:31:45.673059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:31:45.672922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:31:45.674367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:31:45.674656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:31:45.675121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:31:45.675359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:31:46.593861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:31:46.620372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:31:46.637400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:31:46.671885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:31:46.675254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:31:46.679901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:31:46.741636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:31:46.785658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:31:46.807123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:31:46.817797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:31:46.982287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:31:47.004930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:31:47.032549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1213 09:31:50.148815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:32:49.158251       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:32:49.158342       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:32:49.158389       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:32:49.158466       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:32:49.158499       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476932    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-ca-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476983    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-flexvolume-dir\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477010    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2791db0168e112ad1ebb49d47ad7acc4-kubeconfig\") pod \"kube-scheduler-embed-certs-594077\" (UID: \"2791db0168e112ad1ebb49d47ad7acc4\") " pod="kube-system/kube-scheduler-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477028    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-certs\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477043    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-k8s-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477057    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-usr-share-ca-certificates\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477075    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-k8s-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477090    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-kubeconfig\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477105    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477118    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-data\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477149    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-ca-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.552903    4246 apiserver.go:52] "Watching apiserver"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.598581    4246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.678949    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-lib-modules\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679096    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-xtables-lock\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679116    4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1db9cb1e-bc7a-4d9f-9042-936fcad750f7-tmp\") pod \"storage-provisioner\" (UID: \"1db9cb1e-bc7a-4d9f-9042-936fcad750f7\") " pod="kube-system/storage-provisioner"
	Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.864587    4246 scope.go:117] "RemoveContainer" containerID="de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.113944    4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114004    4246 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114210    4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-42zcv_kubernetes-dashboard(7123ef17-ca61-4aa4-a10e-b29ec51a6667): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114247    4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-42zcv" podUID="7123ef17-ca61-4aa4-a10e-b29ec51a6667"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.153914    4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154760    4246 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154958    4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9qzb_kube-system(25f7da03-5692-48c6-8b6e-22b84e1aec43): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.155038    4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9qzb" podUID="25f7da03-5692-48c6-8b6e-22b84e1aec43"
	
	
	==> kubernetes-dashboard [9d744ae1656d] <==
	2025/12/13 09:34:04 Using namespace: kubernetes-dashboard
	2025/12/13 09:34:04 Using in-cluster config to connect to apiserver
	2025/12/13 09:34:04 Using secret token for csrf signing
	2025/12/13 09:34:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:34:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:34:04 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:34:04 Generating JWE encryption key
	2025/12/13 09:34:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:34:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:34:04 Initializing JWE encryption key from synchronized object
	2025/12/13 09:34:04 Creating in-cluster Sidecar client
	2025/12/13 09:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:34:04 Serving insecurely on HTTP port: 9090
	2025/12/13 09:34:04 Starting overwatch
	2025/12/13 09:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f643a44c094] <==
	I1213 09:34:52.258732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:34:52.299738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:34:52.300284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:34:52.305497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [de05857e10ed] <==
	I1213 09:33:46.911515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:34:16.919520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
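Note on the ErrImagePull entries in the kubelet log above: the metrics-server failure points at fake.domain/registry.k8s.io/echoserver:1.4, which appears to be a deliberately unresolvable registry used by this suite's setup, so that error is expected background noise rather than the cause of the Pause failure. The dashboard-metrics-scraper failure is registry.k8s.io/echoserver:1.4 itself, which the daemon rejects because the image is still published as Docker manifest schema 1 (support removed in current Docker, per the error text). An illustrative way to confirm the manifest format from any Docker host (not part of the captured run):

	docker manifest inspect registry.k8s.io/echoserver:1.4   # "schemaVersion": 1 would confirm the removed v1/schema-1 format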
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-594077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1 (66.070866ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-r9qzb" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-42zcv" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (39.30s)
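Note: this failure (and the default-k8s-diff-port Pause failure below) trips the same assertion at start_stop_delete_test.go:309: after pausing the profile, the apiserver status is expected to read "Paused", but minikube status returned "Stopped", with each status call taking roughly 16 seconds to complete. A manual reproduction sketch using the exact profile and flags from the run above (illustrative only, not additional test output):

	out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077   # want "Paused"; this run got "Stopped"
	out/minikube-linux-amd64 unpause -p embed-certs-594077 --alsologtostderr -v=1

(The NotFound errors from the describe step above are likely a namespace artifact: the listed pods live in kube-system and kubernetes-dashboard, while the describe command runs without a -n flag.)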

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (41.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-018953 --alsologtostderr -v=1
E1213 09:36:07.022670   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.029234   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.041062   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.063193   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.104748   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.186381   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.347926   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:07.669583   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-018953 --alsologtostderr -v=1: (1.735227058s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
E1213 09:36:08.311733   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:09.593969   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953: exit status 2 (16.025318624s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
E1213 09:36:24.268027   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.274568   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.286233   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.308371   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.349969   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.432169   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.594399   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:24.916155   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:25.558164   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:26.840656   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:27.518949   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:29.402477   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953: exit status 2 (15.910690108s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-018953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-018953 --alsologtostderr -v=1: (1.049850833s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018953 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018953 logs -n 25: (2.179119171s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-949855 sudo systemctl cat kubelet --no-pager                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo journalctl -xeu kubelet --all --full --no-pager                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/kubernetes/kubelet.conf                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /var/lib/kubelet/config.yaml                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status docker --all --full --no-pager                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat docker --no-pager                                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/docker/daemon.json                                                       │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo docker system info                                                                │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status cri-docker --all --full --no-pager                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat cri-docker --no-pager                                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                          │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /usr/lib/systemd/system/cri-docker.service                                    │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cri-dockerd --version                                                             │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status containerd --all --full --no-pager                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat containerd --no-pager                                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /lib/systemd/system/containerd.service                                        │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/containerd/config.toml                                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo containerd config dump                                                            │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status crio --all --full --no-pager                                     │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │                     │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat crio --no-pager                                                     │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                           │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo crio config                                                                       │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ delete  │ -p kindnet-949855                                                                                        │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ start   │ -p false-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 │ false-949855                 │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-018953 --alsologtostderr -v=1                                                   │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:36:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:36:32.871102   52766 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:36:32.871413   52766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:32.871426   52766 out.go:374] Setting ErrFile to fd 2...
	I1213 09:36:32.871430   52766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:32.871678   52766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:36:32.872268   52766 out.go:368] Setting JSON to false
	I1213 09:36:32.873214   52766 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4743,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:36:32.873287   52766 start.go:143] virtualization: kvm guest
	I1213 09:36:32.878612   52766 out.go:179] * [false-949855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:36:32.880401   52766 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:36:32.880398   52766 notify.go:221] Checking for updates...
	I1213 09:36:32.883214   52766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:36:32.884677   52766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:36:32.886183   52766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:32.887659   52766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:36:32.889041   52766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:36:32.890830   52766 config.go:182] Loaded profile config "calico-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.890953   52766 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.891052   52766 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.891152   52766 config.go:182] Loaded profile config "guest-566372": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1213 09:36:32.891268   52766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:36:32.933267   52766 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 09:36:32.934640   52766 start.go:309] selected driver: kvm2
	I1213 09:36:32.934662   52766 start.go:927] validating driver "kvm2" against <nil>
	I1213 09:36:32.934694   52766 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:36:32.935509   52766 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:36:32.935821   52766 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:36:32.935848   52766 cni.go:84] Creating CNI manager for "false"
	I1213 09:36:32.935904   52766 start.go:353] cluster config:
	{Name:false-949855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-949855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:36:32.936002   52766 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:36:32.937757   52766 out.go:179] * Starting "false-949855" primary control-plane node in "false-949855" cluster
	I1213 09:36:32.939092   52766 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 09:36:32.939139   52766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 09:36:32.939148   52766 cache.go:65] Caching tarball of preloaded images
	I1213 09:36:32.939290   52766 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:36:32.939307   52766 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 09:36:32.939459   52766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/false-949855/config.json ...
	I1213 09:36:32.939483   52766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/false-949855/config.json: {Name:mk941445b61a355483a4c1bce31b72c56029b828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:32.939677   52766 start.go:360] acquireMachinesLock for false-949855: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:36:32.939713   52766 start.go:364] duration metric: took 19.992µs to acquireMachinesLock for "false-949855"
	I1213 09:36:32.939739   52766 start.go:93] Provisioning new machine with config: &{Name:false-949855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-949855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 09:36:32.939823   52766 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 09:36:29.649364   51535 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 09:36:29.649451   51535 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1213 09:36:29.660454   51535 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1213 09:36:29.660488   51535 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1213 09:36:29.708443   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 09:36:30.332618   51535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:36:30.332785   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:30.332834   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-949855 minikube.k8s.io/updated_at=2025_12_13T09_36_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=custom-flannel-949855 minikube.k8s.io/primary=true
	I1213 09:36:30.367507   51535 ops.go:34] apiserver oom_adj: -16
	I1213 09:36:30.660125   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:31.160329   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:31.661176   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:32.160592   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:32.660629   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:33.160679   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:33.660579   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.160332   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.660723   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.872202   51535 kubeadm.go:1114] duration metric: took 4.539474041s to wait for elevateKubeSystemPrivileges
	I1213 09:36:34.872243   51535 kubeadm.go:403] duration metric: took 20.79753519s to StartCluster
	I1213 09:36:34.872265   51535 settings.go:142] acquiring lock: {Name:mk8102caadd7518d766b7222a696a7b7744bf016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:34.872375   51535 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:36:34.874212   51535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-9390/kubeconfig: {Name:mk2a9127c7f784c4f7a3155b56df24ca7e80b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:34.874666   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 09:36:34.874688   51535 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:36:34.874763   51535 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-949855"
	I1213 09:36:34.874664   51535 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.251 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 09:36:34.874782   51535 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-949855"
	I1213 09:36:34.874805   51535 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-949855"
	I1213 09:36:34.874782   51535 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-949855"
	I1213 09:36:34.874950   51535 host.go:66] Checking if "custom-flannel-949855" exists ...
	I1213 09:36:34.874958   51535 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:34.875095   51535 cache.go:107] acquiring lock: {Name:mk42b4b4a968c0b780e9e698938a03f292a350d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:36:34.875213   51535 cache.go:115] /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1213 09:36:34.875247   51535 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 170.734µs
	I1213 09:36:34.875257   51535 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1213 09:36:34.875271   51535 cache.go:87] Successfully saved all images to host disk.
	I1213 09:36:34.875508   51535 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:34.879004   51535 out.go:179] * Verifying Kubernetes components...
	I1213 09:36:34.879972   51535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:36:34.880404   51535 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:36:32.016317   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:32.016402   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:32.016420   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:32.016435   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:32.016445   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:32.016452   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:32.016463   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:32.016475   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:32.016481   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:32.016493   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:36:32.016516   50577 retry.go:31] will retry after 490.807491ms: missing components: kube-dns
	I1213 09:36:32.513558   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:32.513593   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:32.513602   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:32.513609   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:32.513615   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:32.513620   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:32.513624   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:32.513629   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:32.513634   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:32.513641   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:36:32.513658   50577 retry.go:31] will retry after 797.137585ms: missing components: kube-dns
	I1213 09:36:33.321222   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:33.321267   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:33.321279   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:33.321290   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:33.321297   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:33.321305   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:33.321310   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:33.321316   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:33.321321   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:33.321326   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:33.321380   50577 retry.go:31] will retry after 779.818137ms: missing components: kube-dns
	I1213 09:36:34.114613   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:34.114650   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:34.114663   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:34.114671   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:34.114677   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:34.114683   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:34.114688   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:34.114694   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:34.114699   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:34.114705   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:34.114723   50577 retry.go:31] will retry after 1.055249542s: missing components: kube-dns
	I1213 09:36:35.180653   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:35.180703   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:35.180719   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:35.180729   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:35.180735   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:35.180743   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:35.180752   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:35.180766   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:35.180771   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:35.180776   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:35.180795   50577 retry.go:31] will retry after 1.139802374s: missing components: kube-dns
	I1213 09:36:36.331485   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:36.331533   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:36.331548   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:36.331560   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:36.331567   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:36.331574   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:36.331583   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:36.331596   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:36.331603   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:36.331608   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:36.331629   50577 retry.go:31] will retry after 1.874316657s: missing components: kube-dns
	I1213 09:36:34.880563   51535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:36:34.880921   51535 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-949855"
	I1213 09:36:34.880971   51535 host.go:66] Checking if "custom-flannel-949855" exists ...
	I1213 09:36:34.881855   51535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:36:34.881874   51535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:36:34.883942   51535 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:36:34.883967   51535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:36:34.884896   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.885677   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.885719   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.886070   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:34.887037   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.887971   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.888010   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.888450   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:34.889609   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.890165   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.890195   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.890401   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:35.591006   51535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:36:35.718834   51535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:36:35.916022   51535 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.035409951s)
	I1213 09:36:35.916108   51535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:36:35.916124   51535 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.036121711s)
	I1213 09:36:35.916161   51535 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 09:36:35.916037   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.041317057s)
	I1213 09:36:35.916171   51535 docker.go:697] gcr.io/k8s-minikube/gvisor-addon:2 wasn't preloaded
	I1213 09:36:35.916179   51535 cache_images.go:90] LoadCachedImages start: [gcr.io/k8s-minikube/gvisor-addon:2]
	I1213 09:36:35.916367   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 09:36:35.918742   51535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.123565   51535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.404682689s)
	I1213 09:36:37.123670   51535 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.207544979s)
	I1213 09:36:37.123929   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.20753791s)
	I1213 09:36:37.123954   51535 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1213 09:36:37.124039   51535 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2: (1.205256516s)
	I1213 09:36:37.124081   51535 cache_images.go:118] "gcr.io/k8s-minikube/gvisor-addon:2" needs transfer: "gcr.io/k8s-minikube/gvisor-addon:2" does not exist at hash "sha256:3b59a93df63497f2242eafd7e18fd26aff9ead0899361b8a5f7ac2e648ba898e" in container runtime
	I1213 09:36:37.124115   51535 docker.go:338] Removing image: gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.124159   51535 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.124997   51535 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-949855" to be "Ready" ...
	I1213 09:36:37.125726   51535 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
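	For reference, the "host record injected into CoreDNS's ConfigMap" message above is the result of the sed pipeline a few lines earlier: it inserts a `log` directive just above `errors` and splices a `hosts` block just above the `forward . /etc/resolv.conf` line, so that pods can resolve host.minikube.internal to the host-side gateway (192.168.83.1 for this profile). Reconstructed from that sed expression, the spliced-in block looks roughly like this (a fragment of the Corefile only, not the whole thing):

	        hosts {
	           192.168.83.1 host.minikube.internal
	           fallthrough
	        }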
	I1213 09:36:32.942036   52766 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1213 09:36:32.942240   52766 start.go:159] libmachine.API.Create for "false-949855" (driver="kvm2")
	I1213 09:36:32.942269   52766 client.go:173] LocalClient.Create starting
	I1213 09:36:32.942387   52766 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-9390/.minikube/certs/ca.pem
	I1213 09:36:32.942441   52766 main.go:143] libmachine: Decoding PEM data...
	I1213 09:36:32.942470   52766 main.go:143] libmachine: Parsing certificate...
	I1213 09:36:32.942534   52766 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-9390/.minikube/certs/cert.pem
	I1213 09:36:32.942573   52766 main.go:143] libmachine: Decoding PEM data...
	I1213 09:36:32.942591   52766 main.go:143] libmachine: Parsing certificate...
	I1213 09:36:32.942975   52766 main.go:143] libmachine: creating domain...
	I1213 09:36:32.942987   52766 main.go:143] libmachine: creating network...
	I1213 09:36:32.945746   52766 main.go:143] libmachine: found existing default network
	I1213 09:36:32.945931   52766 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:32.946996   52766 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:0a:2a} reservation:<nil>}
	I1213 09:36:32.947782   52766 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:3d:2c} reservation:<nil>}
	I1213 09:36:32.948796   52766 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebc2e0}
	I1213 09:36:32.948903   52766 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-false-949855</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:32.955737   52766 main.go:143] libmachine: creating private network mk-false-949855 192.168.61.0/24...
	I1213 09:36:33.054561   52766 main.go:143] libmachine: private network mk-false-949855 192.168.61.0/24 created
	I1213 09:36:33.055066   52766 main.go:143] libmachine: <network>
	  <name>mk-false-949855</name>
	  <uuid>1d3be71d-800e-4eda-816b-b80fe9958d0b</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:75:81:cc'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:33.055111   52766 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 ...
	I1213 09:36:33.055141   52766 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22128-9390/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 09:36:33.055154   52766 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:33.055224   52766 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22128-9390/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22128-9390/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1213 09:36:33.363325   52766 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/id_rsa...
	I1213 09:36:33.455792   52766 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk...
	I1213 09:36:33.455864   52766 main.go:143] libmachine: Writing magic tar header
	I1213 09:36:33.455889   52766 main.go:143] libmachine: Writing SSH key tar header
	I1213 09:36:33.455970   52766 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 ...
	I1213 09:36:33.456031   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855
	I1213 09:36:33.456053   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 (perms=drwx------)
	I1213 09:36:33.456068   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube/machines
	I1213 09:36:33.456078   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube/machines (perms=drwxr-xr-x)
	I1213 09:36:33.456088   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:33.456097   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube (perms=drwxr-xr-x)
	I1213 09:36:33.456104   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390
	I1213 09:36:33.456113   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390 (perms=drwxrwxr-x)
	I1213 09:36:33.456122   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 09:36:33.456132   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 09:36:33.456140   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 09:36:33.456147   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 09:36:33.456155   52766 main.go:143] libmachine: checking permissions on dir: /home
	I1213 09:36:33.456162   52766 main.go:143] libmachine: skipping /home - not owner
	I1213 09:36:33.456165   52766 main.go:143] libmachine: defining domain...
	I1213 09:36:33.457580   52766 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>false-949855</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-false-949855'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:36:33.466815   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:84:b8:af in network default
	I1213 09:36:33.467785   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:33.467808   52766 main.go:143] libmachine: starting domain...
	I1213 09:36:33.467812   52766 main.go:143] libmachine: ensuring networks are active...
	I1213 09:36:33.468839   52766 main.go:143] libmachine: Ensuring network default is active
	I1213 09:36:33.469438   52766 main.go:143] libmachine: Ensuring network mk-false-949855 is active
	I1213 09:36:33.470524   52766 main.go:143] libmachine: getting domain XML...
	I1213 09:36:33.472111   52766 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>false-949855</name>
	  <uuid>ab7f1c5a-e30f-44dd-a282-41f48a9acd2a</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:18:fc'/>
	      <source network='mk-false-949855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:84:b8:af'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
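	The two XML documents above are essentially the whole VM-creation story for the kvm2 driver: the first is the domain as minikube defines it, the second is libvirt's expanded view of the same domain just before it is started. For orientation only, a minimal sketch of the underlying libvirt calls, written against the libvirt.org/go/libvirt bindings (an assumption; the driver's real code adds retries, permission checks and cleanup, and the file names here are placeholders), would look like:

	// startvm.go - a minimal sketch (not the kvm2 driver's actual code) of the
	// libvirt calls behind the log above: define the private network and the
	// domain from the XML shown, then start both. Assumes libvirt.org/go/libvirt
	// and a local qemu:///system socket.
	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		netXML, err := os.ReadFile("mk-false-949855.xml") // the <network> XML logged earlier
		if err != nil {
			log.Fatalf("read network XML: %v", err)
		}
		net, err := conn.NetworkDefineXML(string(netXML))
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		if err := net.Create(); err != nil { // "ensuring networks are active..."
			log.Fatalf("start network: %v", err)
		}

		domXML, err := os.ReadFile("false-949855.xml") // the <domain type='kvm'> XML logged above
		if err != nil {
			log.Fatalf("read domain XML: %v", err)
		}
		dom, err := conn.DomainDefineXML(string(domXML))
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		if err := dom.Create(); err != nil { // "waiting for domain to start..."
			log.Fatalf("start domain: %v", err)
		}
	}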
	
	I1213 09:36:35.139253   52766 main.go:143] libmachine: waiting for domain to start...
	I1213 09:36:35.140924   52766 main.go:143] libmachine: domain is now running
	I1213 09:36:35.140948   52766 main.go:143] libmachine: waiting for IP...
	I1213 09:36:35.142097   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.142903   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.142937   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.143406   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.143479   52766 retry.go:31] will retry after 255.519964ms: waiting for domain to come up
	I1213 09:36:35.401410   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.402406   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.402429   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.402949   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.402996   52766 retry.go:31] will retry after 388.363121ms: waiting for domain to come up
	I1213 09:36:35.792777   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.793790   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.793813   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.794605   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.794648   52766 retry.go:31] will retry after 406.715621ms: waiting for domain to come up
	I1213 09:36:36.203685   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:36.204646   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:36.204672   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:36.205190   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:36.205233   52766 retry.go:31] will retry after 462.279363ms: waiting for domain to come up
	I1213 09:36:36.669862   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:36.670863   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:36.670891   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:36.671532   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:36.671576   52766 retry.go:31] will retry after 717.301952ms: waiting for domain to come up
	I1213 09:36:37.390701   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:37.391903   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:37.391928   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:37.392632   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:37.392700   52766 retry.go:31] will retry after 844.713524ms: waiting for domain to come up
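	The "no network interface addresses found (source=lease) ... trying to list again with source=arp ... will retry after ..." lines above are the driver's wait-for-IP loop: it first asks libvirt for the domain's DHCP lease, falls back to the ARP cache, and backs off between attempts until the new VM reports an address. A rough stand-alone equivalent, shelling out to virsh rather than using the bindings directly (an assumption made for brevity), is:

	// waitforip.go - a rough sketch of the wait-for-IP loop in the log above.
	// Assumes virsh is on PATH and the same qemu:///system connection.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// domIP asks libvirt (via virsh domifaddr) for the domain's addresses from
	// the given source ("lease" = DHCP lease table, "arp" = ARP cache) and
	// returns the first IPv4 address it finds.
	func domIP(domain, source string) (string, error) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"domifaddr", domain, "--source", source).Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			f := strings.Fields(line)
			if len(f) >= 4 && f[2] == "ipv4" {
				return strings.Split(f[3], "/")[0], nil // strip the /24 prefix
			}
		}
		return "", fmt.Errorf("no address for %s via %s yet", domain, source)
	}

	func main() {
		const domain = "false-949855"
		deadline := time.Now().Add(2 * time.Minute)
		for delay := 250 * time.Millisecond; time.Now().Before(deadline); delay += delay / 2 {
			for _, src := range []string{"lease", "arp"} { // lease first, then arp, as in the log
				if ip, err := domIP(domain, src); err == nil {
					fmt.Println(ip)
					return
				}
			}
			time.Sleep(delay) // "will retry after ..."
		}
		log.Fatalf("timed out waiting for an IP for %s", domain)
	}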
	I1213 09:36:37.127126   51535 addons.go:530] duration metric: took 2.252429436s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 09:36:37.161666   51535 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2
	I1213 09:36:37.161821   51535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2
	I1213 09:36:37.181625   51535 ssh_runner.go:352] existence check for /var/lib/minikube/images/gvisor-addon_2: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/gvisor-addon_2': No such file or directory
	I1213 09:36:37.181671   51535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 --> /var/lib/minikube/images/gvisor-addon_2 (12606976 bytes)
	I1213 09:36:37.633572   51535 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-949855" context rescaled to 1 replicas
	I1213 09:36:37.760162   51535 docker.go:305] Loading image: /var/lib/minikube/images/gvisor-addon_2
	I1213 09:36:37.760209   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load"
	I1213 09:36:38.836731   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load": (1.076497445s)
	I1213 09:36:38.836759   51535 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 from cache
	I1213 09:36:38.836798   51535 cache_images.go:125] Successfully loaded all cached images
	I1213 09:36:38.836806   51535 cache_images.go:94] duration metric: took 2.92061611s to LoadCachedImages
	I1213 09:36:38.836814   51535 cache_images.go:264] succeeded pushing to: custom-flannel-949855
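	The cache_images sequence above boils down to three steps: stat the tarball on the VM, scp it over if missing, and pipe it into `docker load`. A minimal local equivalent of that last step (the path is taken from the log; this assumes a local docker CLI rather than the VM's, and skips the SSH copy) is:

	// loadcached.go - minimal local equivalent of the `cat <tar> | docker load`
	// step above. Assumes the docker CLI is installed and the cached image tar
	// exists at the logged path.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		f, err := os.Open("/var/lib/minikube/images/gvisor-addon_2") // path from the log
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		cmd := exec.Command("docker", "load")
		cmd.Stdin = f // stream the image tarball into `docker load`
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}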
	W1213 09:36:39.133273   51535 node_ready.go:57] node "custom-flannel-949855" has "Ready":"False" status (will retry)
	I1213 09:36:38.233302   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:38.233365   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:38.233381   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:38.233392   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:38.233398   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:38.233405   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:38.233418   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:38.233424   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:38.233431   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:38.233437   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:38.233456   50577 retry.go:31] will retry after 2.187269566s: missing components: kube-dns
	I1213 09:36:40.439567   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:40.439610   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:40.439623   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:40.439633   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:40.439639   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:40.439646   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:40.439652   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:40.439657   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:40.439662   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:40.439667   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:40.439687   50577 retry.go:31] will retry after 3.160505042s: missing components: kube-dns
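	The "missing components: kube-dns" retries above are minikube's system_pods wait: it keeps re-listing kube-system pods until the CoreDNS pod reports Ready. A one-shot manual spot check with kubectl, sketched below (the context name calico-949855 is an assumption inferred from the profile name in the log; the k8s-app=kube-dns label is the one carried by the coredns-66bc5c9577-* pods), does the same check once:

	// checkdns.go - a rough manual version of the kube-dns readiness wait above.
	// Assumes kubectl is on PATH and a kubeconfig context named calico-949855.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "calico-949855",
			"get", "pods", "-n", "kube-system",
			"-l", "k8s-app=kube-dns", // selects the CoreDNS pods
			"-o", "wide")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}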
	
	
	==> Docker <==
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.376831829Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.500522910Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.500664801Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:35:44 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:44Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.739624053Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:35:52 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:52Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.231255405Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.231601592Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.237473779Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.237863871Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.336744894Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.413925743Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.414081115Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:35:57 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:57Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:36:06 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:06.768498035Z" level=info msg="ignoring event" container=aae5aab2e50bdc9408b701e65fed0ce7ed1adc3ead912c1e8d8038523e63d827 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 09:36:41 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-g6zqx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fb14dd0254c8a062a26d1c7681fafeb8ca5202bcd8b02460c1e2e2548aa86f10\""
	Dec 13 09:36:41 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.378414744Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.541550846Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.541669429Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:36:42 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:42Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.577508938Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.577555368Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.582796110Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.582861384Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	a17e1cb8b1325       6e38f40d628db                                                                                         Less than a second ago   Running             storage-provisioner       2                   d95370cb4e448       storage-provisioner                                    kube-system
	9113189d0c04e       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        50 seconds ago           Running             kubernetes-dashboard      0                   c7ad4ecefcb75       kubernetes-dashboard-855c9754f9-p2brd                  kubernetes-dashboard
	5f700aa530657       56cc512116c8f                                                                                         59 seconds ago           Running             busybox                   1                   ac125119b9779       busybox                                                default
	2b439f04f5fc3       52546a367cc9e                                                                                         59 seconds ago           Running             coredns                   1                   5bbb8e3707181       coredns-66bc5c9577-8lnrn                               kube-system
	aae5aab2e50bd       6e38f40d628db                                                                                         About a minute ago       Exited              storage-provisioner       1                   d95370cb4e448       storage-provisioner                                    kube-system
	dca507ceebdbc       8aa150647e88a                                                                                         About a minute ago       Running             kube-proxy                1                   6d67df135cd3f       kube-proxy-bjk4k                                       kube-system
	914937023d7a6       88320b5498ff2                                                                                         About a minute ago       Running             kube-scheduler            1                   74c50b87c8c62       kube-scheduler-default-k8s-diff-port-018953            kube-system
	64509f24cd6b6       a3e246e9556e9                                                                                         About a minute ago       Running             etcd                      1                   598b601dba63a       etcd-default-k8s-diff-port-018953                      kube-system
	a16e34261234d       a5f569d49a979                                                                                         About a minute ago       Running             kube-apiserver            1                   0a8e2bae44573       kube-apiserver-default-k8s-diff-port-018953            kube-system
	ac89bd32a6231       01e8bacf0f500                                                                                         About a minute ago       Running             kube-controller-manager   1                   81ece62c71ee5       kube-controller-manager-default-k8s-diff-port-018953   kube-system
	d3a0413e34a4d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago            Exited              busybox                   0                   fd18283ce6f4d       busybox                                                default
	53a66219a044a       52546a367cc9e                                                                                         2 minutes ago            Exited              coredns                   0                   b93881983858e       coredns-66bc5c9577-8lnrn                               kube-system
	dd7e0f10d15aa       8aa150647e88a                                                                                         3 minutes ago            Exited              kube-proxy                0                   a670e4e9f57dc       kube-proxy-bjk4k                                       kube-system
	b6f1305f6ae8c       88320b5498ff2                                                                                         3 minutes ago            Exited              kube-scheduler            0                   8737390be5970       kube-scheduler-default-k8s-diff-port-018953            kube-system
	1042acfc9b5d0       a5f569d49a979                                                                                         3 minutes ago            Exited              kube-apiserver            0                   c56aa66442294       kube-apiserver-default-k8s-diff-port-018953            kube-system
	2f91314a94996       01e8bacf0f500                                                                                         3 minutes ago            Exited              kube-controller-manager   0                   e539015ef8b88       kube-controller-manager-default-k8s-diff-port-018953   kube-system
	aef3408be0d4d       a3e246e9556e9                                                                                         3 minutes ago            Exited              etcd                      0                   3c1955e0956eb       etcd-default-k8s-diff-port-018953                      kube-system
	
	
	==> coredns [2b439f04f5fc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36785 - 864 "HINFO IN 6554980673800972526.482035092620239871. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.186366491s
	
	
	==> coredns [53a66219a044] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43616 - 2659 "HINFO IN 6498301113996982301.4561126906837148237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12155845s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=default-k8s-diff-port-018953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_33_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018953
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:36:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.59
	  Hostname:    default-k8s-diff-port-018953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4398f09173e84f11abbe3edaa9d2c77b
	  System UUID:                4398f091-73e8-4f11-abbe-3edaa9d2c77b
	  Boot ID:                    5588dae1-1009-43b7-ae1d-20ec2f0d5449
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 coredns-66bc5c9577-8lnrn                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m1s
	  kube-system                 etcd-default-k8s-diff-port-018953                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m7s
	  kube-system                 kube-apiserver-default-k8s-diff-port-018953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-proxy-bjk4k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-default-k8s-diff-port-018953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 metrics-server-746fcd58dc-lbqfl                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m13s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-trhxx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p2brd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m59s              kube-proxy       
	  Normal   Starting                 65s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    3m7s               kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m7s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m7s               kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     3m7s               kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m7s               kubelet          Starting kubelet.
	  Normal   NodeReady                3m4s               kubelet          Node default-k8s-diff-port-018953 status is now: NodeReady
	  Normal   RegisteredNode           3m3s               node-controller  Node default-k8s-diff-port-018953 event: Registered Node default-k8s-diff-port-018953 in Controller
	  Normal   NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 69s                kubelet          Node default-k8s-diff-port-018953 has been rebooted, boot id: 5588dae1-1009-43b7-ae1d-20ec2f0d5449
	  Normal   RegisteredNode           64s                node-controller  Node default-k8s-diff-port-018953 event: Registered Node default-k8s-diff-port-018953 in Controller
	  Normal   Starting                 3s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Dec13 09:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001469] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004195] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.876292] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.132505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.149494] kauditd_printk_skb: 421 callbacks suppressed
	[  +1.761194] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.356585] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.084317] kauditd_printk_skb: 204 callbacks suppressed
	[  +2.570266] kauditd_printk_skb: 183 callbacks suppressed
	[Dec13 09:36] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.295171] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [64509f24cd6b] <==
	{"level":"warn","ts":"2025-12-13T09:35:33.097246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.124604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.164603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.220986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.248442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.284098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.294789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.314196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.420999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46490","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:36:41.604563Z","caller":"traceutil/trace.go:172","msg":"trace[1472656459] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"136.84303ms","start":"2025-12-13T09:36:41.467700Z","end":"2025-12-13T09:36:41.604544Z","steps":["trace[1472656459] 'process raft request'  (duration: 136.733287ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:36:41.605300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.476854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.605385Z","caller":"traceutil/trace.go:172","msg":"trace[513915585] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:722; }","duration":"126.6191ms","start":"2025-12-13T09:36:41.478747Z","end":"2025-12-13T09:36:41.605366Z","steps":["trace[513915585] 'agreement among raft nodes before linearized reading'  (duration: 126.396483ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.731713Z","caller":"traceutil/trace.go:172","msg":"trace[2119032866] linearizableReadLoop","detail":"{readStateIndex:778; appliedIndex:778; }","duration":"102.94709ms","start":"2025-12-13T09:36:41.628522Z","end":"2025-12-13T09:36:41.731469Z","steps":["trace[2119032866] 'read index received'  (duration: 102.936527ms)","trace[2119032866] 'applied index is now lower than readState.Index'  (duration: 9.158µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:36:41.760605Z","caller":"traceutil/trace.go:172","msg":"trace[2019165356] transaction","detail":"{read_only:false; number_of_response:0; response_revision:722; }","duration":"135.332186ms","start":"2025-12-13T09:36:41.625252Z","end":"2025-12-13T09:36:41.760585Z","steps":["trace[2019165356] 'process raft request'  (duration: 107.054817ms)","trace[2019165356] 'compare'  (duration: 28.228942ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:36:41.761102Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.48897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.761145Z","caller":"traceutil/trace.go:172","msg":"trace[545500457] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:722; }","duration":"132.614996ms","start":"2025-12-13T09:36:41.628516Z","end":"2025-12-13T09:36:41.761131Z","steps":["trace[545500457] 'agreement among raft nodes before linearized reading'  (duration: 103.335875ms)","trace[545500457] 'range keys from in-memory index tree'  (duration: 29.130872ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:36:41.856396Z","caller":"traceutil/trace.go:172","msg":"trace[383765451] linearizableReadLoop","detail":"{readStateIndex:779; appliedIndex:779; }","duration":"124.550221ms","start":"2025-12-13T09:36:41.731828Z","end":"2025-12-13T09:36:41.856379Z","steps":["trace[383765451] 'read index received'  (duration: 124.543153ms)","trace[383765451] 'applied index is now lower than readState.Index'  (duration: 6.218µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:36:41.856523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.01702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.856540Z","caller":"traceutil/trace.go:172","msg":"trace[1107816314] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:722; }","duration":"188.05425ms","start":"2025-12-13T09:36:41.668482Z","end":"2025-12-13T09:36:41.856536Z","steps":["trace[1107816314] 'agreement among raft nodes before linearized reading'  (duration: 187.989836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:36:41.856999Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.037555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-018953\" limit:1 ","response":"range_response_count:1 size:4733"}
	{"level":"info","ts":"2025-12-13T09:36:41.857118Z","caller":"traceutil/trace.go:172","msg":"trace[160251017] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-018953; range_end:; response_count:1; response_revision:722; }","duration":"224.162611ms","start":"2025-12-13T09:36:41.632944Z","end":"2025-12-13T09:36:41.857106Z","steps":["trace[160251017] 'agreement among raft nodes before linearized reading'  (duration: 223.943602ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.857893Z","caller":"traceutil/trace.go:172","msg":"trace[1822799367] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"230.912908ms","start":"2025-12-13T09:36:41.626965Z","end":"2025-12-13T09:36:41.857878Z","steps":["trace[1822799367] 'process raft request'  (duration: 230.535159ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858607Z","caller":"traceutil/trace.go:172","msg":"trace[1952376893] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"157.009688ms","start":"2025-12-13T09:36:41.701584Z","end":"2025-12-13T09:36:41.858593Z","steps":["trace[1952376893] 'process raft request'  (duration: 156.07451ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858823Z","caller":"traceutil/trace.go:172","msg":"trace[1123472395] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"224.667593ms","start":"2025-12-13T09:36:41.634147Z","end":"2025-12-13T09:36:41.858814Z","steps":["trace[1123472395] 'process raft request'  (duration: 223.440666ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858954Z","caller":"traceutil/trace.go:172","msg":"trace[655288941] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"189.020621ms","start":"2025-12-13T09:36:41.669927Z","end":"2025-12-13T09:36:41.858947Z","steps":["trace[655288941] 'process raft request'  (duration: 187.690908ms)"],"step_count":1}
	
	
	==> etcd [aef3408be0d4] <==
	{"level":"info","ts":"2025-12-13T09:33:38.161007Z","caller":"traceutil/trace.go:172","msg":"trace[423360871] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:276; }","duration":"201.246792ms","start":"2025-12-13T09:33:37.959749Z","end":"2025-12-13T09:33:38.160996Z","steps":["trace[423360871] 'agreement among raft nodes before linearized reading'  (duration: 149.547964ms)","trace[423360871] 'range keys from in-memory index tree'  (duration: 51.257162ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:33:38.346173Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.302638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-13T09:33:38.346260Z","caller":"traceutil/trace.go:172","msg":"trace[1776984165] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:277; }","duration":"145.398107ms","start":"2025-12-13T09:33:38.200845Z","end":"2025-12-13T09:33:38.346243Z","steps":["trace[1776984165] 'agreement among raft nodes before linearized reading'  (duration: 84.191909ms)","trace[1776984165] 'range keys from in-memory index tree'  (duration: 60.598183ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:33:38.346513Z","caller":"traceutil/trace.go:172","msg":"trace[1272861528] transaction","detail":"{read_only:false; response_revision:278; number_of_response:1; }","duration":"149.00589ms","start":"2025-12-13T09:33:38.197469Z","end":"2025-12-13T09:33:38.346475Z","steps":["trace[1272861528] 'process raft request'  (duration: 87.608816ms)","trace[1272861528] 'compare'  (duration: 60.635698ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:33:38.608864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.57294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11123217178011021227 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/job-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/job-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:33:38.608954Z","caller":"traceutil/trace.go:172","msg":"trace[191922514] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"249.397781ms","start":"2025-12-13T09:33:38.359546Z","end":"2025-12-13T09:33:38.608943Z","steps":["trace[191922514] 'process raft request'  (duration: 103.188496ms)","trace[191922514] 'compare'  (duration: 145.311701ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:33:38.636561Z","caller":"traceutil/trace.go:172","msg":"trace[1798635197] transaction","detail":"{read_only:false; response_revision:280; number_of_response:1; }","duration":"268.484714ms","start":"2025-12-13T09:33:38.367981Z","end":"2025-12-13T09:33:38.636465Z","steps":["trace[1798635197] 'process raft request'  (duration: 267.17058ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:34:31.546637Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:34:31.546760Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-018953","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	{"level":"error","ts":"2025-12-13T09:34:31.546883Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:34:31.550325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:34:38.553571Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.553627Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602de89049e69a5d","current-leader-member-id":"602de89049e69a5d"}
	{"level":"info","ts":"2025-12-13T09:34:38.553850Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:34:38.553862Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556612Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556706Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:34:38.556718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556757Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556765Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.59:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:34:38.556770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.59:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.562282Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"error","ts":"2025-12-13T09:34:38.562408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.59:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.562438Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2025-12-13T09:34:38.562445Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-018953","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	
	
	==> kernel <==
	 09:36:43 up 1 min,  0 users,  load average: 1.22, 0.57, 0.21
	Linux default-k8s-diff-port-018953 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1042acfc9b5d] <==
	W1213 09:34:40.622822       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.624325       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.661871       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.672549       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.730084       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.736885       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.756033       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.780475       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.826049       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.837920       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.932683       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.932769       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.952101       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.143618       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.182743       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.196680       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.205755       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.229454       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.233434       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.256087       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.282714       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.303983       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.517596       1 logging.go:55] [core] [Channel #20 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.653186       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.672603       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a16e34261234] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:35:35.792541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:35:35.792921       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:35:35.793174       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:35:35.794093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 09:35:37.097294       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:35:37.176751       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:35:37.227745       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:35:37.272717       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:35:39.196593       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:35:39.355391       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:35:39.420125       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:35:40.658511       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:41.272672       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.93.220"}
	I1213 09:35:41.301717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.133"}
	W1213 09:36:40.177828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:36:40.182971       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:36:40.185715       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:36:40.186493       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:36:40.186530       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:36:40.186807       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2f91314a9499] <==
	I1213 09:33:40.950295       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 09:33:40.956871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:40.958115       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 09:33:40.958323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:33:40.966170       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:33:40.966183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:33:40.966391       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 09:33:40.967667       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 09:33:40.968920       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:33:40.968953       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:33:40.972144       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:33:40.972199       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:33:40.972742       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:33:40.977153       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:33:40.977253       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:33:40.979526       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:33:40.982992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:33:40.983081       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:33:40.998693       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:33:40.999070       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 09:33:40.999177       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:33:40.999329       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:33:40.999443       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:33:40.999453       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:33:41.012729       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-018953" podCIDRs=["10.244.0.0/24"]
	
	
	==> kube-controller-manager [ac89bd32a623] <==
	I1213 09:35:39.162655       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 09:35:39.162976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:35:39.171295       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:35:39.174249       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:35:39.174268       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:35:39.179131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:35:39.184147       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:35:39.186918       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:35:39.204596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:35:39.204652       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:35:39.207228       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:35:39.225600       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:35:39.405213       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:35:39.405236       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:35:39.405243       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:35:39.463258       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 09:35:40.828492       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.868395       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.920353       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.927772       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.954597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:41.010644       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:41.011188       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1213 09:36:40.263544       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1213 09:36:40.276548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [dca507ceebdb] <==
	I1213 09:35:37.040949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:35:37.141659       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:35:37.144469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.59"]
	E1213 09:35:37.146002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:35:37.358849       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:35:37.358979       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:35:37.360125       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:35:37.387908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:35:37.391056       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:35:37.391095       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:35:37.403617       1 config.go:200] "Starting service config controller"
	I1213 09:35:37.403649       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:35:37.404430       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:35:37.404450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:35:37.404570       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:35:37.404575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:35:37.418884       1 config.go:309] "Starting node config controller"
	I1213 09:35:37.425706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:35:37.425925       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:35:37.504639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:35:37.504639       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:35:37.504659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dd7e0f10d15a] <==
	I1213 09:33:43.815907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:33:43.916581       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:33:43.916634       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.59"]
	E1213 09:33:43.916761       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:33:44.084469       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:33:44.084804       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:33:44.085051       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:33:44.146236       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:33:44.148849       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:33:44.148893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:44.174699       1 config.go:200] "Starting service config controller"
	I1213 09:33:44.186845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:33:44.177011       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:33:44.199241       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:33:44.177199       1 config.go:309] "Starting node config controller"
	I1213 09:33:44.199731       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:33:44.199742       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:33:44.176999       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:33:44.199753       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:33:44.300748       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:33:44.300859       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:33:44.324653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [914937023d7a] <==
	I1213 09:35:33.551899       1 serving.go:386] Generated self-signed cert in-memory
	I1213 09:35:35.483371       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:35:35.483423       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:35:35.542688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 09:35:35.543148       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 09:35:35.543463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:35:35.544496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:35:35.544750       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.544814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.556538       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:35:35.557833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:35:35.647442       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.683573       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 09:35:35.683720       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b6f1305f6ae8] <==
	E1213 09:33:33.891539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:33:33.934952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:33:33.941462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:33:34.097604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:33:34.133283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:33:34.228471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:33:34.295819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:33:34.316845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:33:34.398572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:33:34.424661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:33:34.433706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:33:34.466980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:33:34.520494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:33:34.524204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:33:34.564953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:33:34.599575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:33:34.603627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:33:34.662432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 09:33:36.662642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:34:31.600635       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:34:31.600665       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:34:31.600678       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:34:31.600709       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:34:31.601379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:34:31.606769       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.693509    4247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56aa66442294d95cdc779db499f73a6a65c5d8343714f4138a7c6139d20be84"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.694396    4247 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.746098    4247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd18283ce6f4da3d099da743bad1acd3bbd473f1ee32642ec5b01ea1094721a2"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863087    4247 kubelet_node_status.go:124] "Node was previously registered" node="default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863449    4247 kubelet_node_status.go:78] "Successfully registered node" node="default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863559    4247 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.865734    4247 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.869479    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.910589    4247 apiserver.go:52] "Watching apiserver"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.939474    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-018953\" already exists" pod="kube-system/etcd-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.939595    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.940658    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.997570    4247 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063317    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f04d783-316c-46b9-af1b-892240189979-tmp\") pod \"storage-provisioner\" (UID: \"4f04d783-316c-46b9-af1b-892240189979\") " pod="kube-system/storage-provisioner"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063494    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/222e259f-13b1-4cc0-b420-fb9f4c871473-xtables-lock\") pod \"kube-proxy-bjk4k\" (UID: \"222e259f-13b1-4cc0-b420-fb9f4c871473\") " pod="kube-system/kube-proxy-bjk4k"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063533    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/222e259f-13b1-4cc0-b420-fb9f4c871473-lib-modules\") pod \"kube-proxy-bjk4k\" (UID: \"222e259f-13b1-4cc0-b420-fb9f4c871473\") " pod="kube-system/kube-proxy-bjk4k"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.224982    4247 scope.go:117] "RemoveContainer" containerID="aae5aab2e50bdc9408b701e65fed0ce7ed1adc3ead912c1e8d8038523e63d827"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547075    4247 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547146    4247 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547406    4247 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-trhxx_kubernetes-dashboard(def61142-6626-499a-b752-60ee3640ae87): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547486    4247 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trhxx" podUID="def61142-6626-499a-b752-60ee3640ae87"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583705    4247 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583751    4247 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583835    4247 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-lbqfl_kube-system(c4360498-5419-48e2-994c-87efe5f4c20f): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583864    4247 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lbqfl" podUID="c4360498-5419-48e2-994c-87efe5f4c20f"
	
	
	==> kubernetes-dashboard [9113189d0c04] <==
	2025/12/13 09:35:53 Using namespace: kubernetes-dashboard
	2025/12/13 09:35:53 Using in-cluster config to connect to apiserver
	2025/12/13 09:35:53 Using secret token for csrf signing
	2025/12/13 09:35:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:35:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:35:53 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:35:53 Generating JWE encryption key
	2025/12/13 09:35:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:35:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:35:53 Initializing JWE encryption key from synchronized object
	2025/12/13 09:35:53 Creating in-cluster Sidecar client
	2025/12/13 09:35:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:35:53 Serving insecurely on HTTP port: 9090
	2025/12/13 09:36:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:35:53 Starting overwatch
	
	
	==> storage-provisioner [a17e1cb8b132] <==
	I1213 09:36:42.715501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:36:42.736952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:36:42.737511       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:36:42.741470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [aae5aab2e50b] <==
	I1213 09:35:36.713132       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:36:06.734141       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx
E1213 09:36:44.765823   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx: exit status 1 (86.128132ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-lbqfl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-trhxx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018953 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018953 logs -n 25: (1.582712952s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-949855 sudo systemctl cat kubelet --no-pager                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo journalctl -xeu kubelet --all --full --no-pager                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/kubernetes/kubelet.conf                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /var/lib/kubelet/config.yaml                                                  │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status docker --all --full --no-pager                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat docker --no-pager                                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/docker/daemon.json                                                       │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo docker system info                                                                │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status cri-docker --all --full --no-pager                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat cri-docker --no-pager                                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                          │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /usr/lib/systemd/system/cri-docker.service                                    │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cri-dockerd --version                                                             │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status containerd --all --full --no-pager                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat containerd --no-pager                                               │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /lib/systemd/system/containerd.service                                        │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo cat /etc/containerd/config.toml                                                   │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo containerd config dump                                                            │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo systemctl status crio --all --full --no-pager                                     │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │                     │
	│ ssh     │ -p kindnet-949855 sudo systemctl cat crio --no-pager                                                     │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                           │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ ssh     │ -p kindnet-949855 sudo crio config                                                                       │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ delete  │ -p kindnet-949855                                                                                        │ kindnet-949855               │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	│ start   │ -p false-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 │ false-949855                 │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-018953 --alsologtostderr -v=1                                                   │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:36 UTC │ 13 Dec 25 09:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:36:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:36:32.871102   52766 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:36:32.871413   52766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:32.871426   52766 out.go:374] Setting ErrFile to fd 2...
	I1213 09:36:32.871430   52766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:32.871678   52766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:36:32.872268   52766 out.go:368] Setting JSON to false
	I1213 09:36:32.873214   52766 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4743,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:36:32.873287   52766 start.go:143] virtualization: kvm guest
	I1213 09:36:32.878612   52766 out.go:179] * [false-949855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:36:32.880401   52766 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:36:32.880398   52766 notify.go:221] Checking for updates...
	I1213 09:36:32.883214   52766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:36:32.884677   52766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:36:32.886183   52766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:32.887659   52766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:36:32.889041   52766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:36:32.890830   52766 config.go:182] Loaded profile config "calico-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.890953   52766 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.891052   52766 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:32.891152   52766 config.go:182] Loaded profile config "guest-566372": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1213 09:36:32.891268   52766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:36:32.933267   52766 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 09:36:32.934640   52766 start.go:309] selected driver: kvm2
	I1213 09:36:32.934662   52766 start.go:927] validating driver "kvm2" against <nil>
	I1213 09:36:32.934694   52766 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:36:32.935509   52766 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:36:32.935821   52766 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:36:32.935848   52766 cni.go:84] Creating CNI manager for "false"
	I1213 09:36:32.935904   52766 start.go:353] cluster config:
	{Name:false-949855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-949855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:36:32.936002   52766 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:36:32.937757   52766 out.go:179] * Starting "false-949855" primary control-plane node in "false-949855" cluster
	I1213 09:36:32.939092   52766 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 09:36:32.939139   52766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 09:36:32.939148   52766 cache.go:65] Caching tarball of preloaded images
	I1213 09:36:32.939290   52766 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:36:32.939307   52766 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 09:36:32.939459   52766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/false-949855/config.json ...
	I1213 09:36:32.939483   52766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/false-949855/config.json: {Name:mk941445b61a355483a4c1bce31b72c56029b828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:32.939677   52766 start.go:360] acquireMachinesLock for false-949855: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:36:32.939713   52766 start.go:364] duration metric: took 19.992µs to acquireMachinesLock for "false-949855"
	I1213 09:36:32.939739   52766 start.go:93] Provisioning new machine with config: &{Name:false-949855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-949855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 09:36:32.939823   52766 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 09:36:29.649364   51535 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 09:36:29.649451   51535 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1213 09:36:29.660454   51535 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1213 09:36:29.660488   51535 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1213 09:36:29.708443   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 09:36:30.332618   51535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:36:30.332785   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:30.332834   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-949855 minikube.k8s.io/updated_at=2025_12_13T09_36_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=custom-flannel-949855 minikube.k8s.io/primary=true
	I1213 09:36:30.367507   51535 ops.go:34] apiserver oom_adj: -16
	I1213 09:36:30.660125   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:31.160329   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:31.661176   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:32.160592   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:32.660629   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:33.160679   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:33.660579   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.160332   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.660723   51535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:36:34.872202   51535 kubeadm.go:1114] duration metric: took 4.539474041s to wait for elevateKubeSystemPrivileges
	I1213 09:36:34.872243   51535 kubeadm.go:403] duration metric: took 20.79753519s to StartCluster
	I1213 09:36:34.872265   51535 settings.go:142] acquiring lock: {Name:mk8102caadd7518d766b7222a696a7b7744bf016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:34.872375   51535 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 09:36:34.874212   51535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-9390/kubeconfig: {Name:mk2a9127c7f784c4f7a3155b56df24ca7e80b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:36:34.874666   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 09:36:34.874688   51535 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:36:34.874763   51535 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-949855"
	I1213 09:36:34.874664   51535 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.251 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 09:36:34.874782   51535 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-949855"
	I1213 09:36:34.874805   51535 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-949855"
	I1213 09:36:34.874782   51535 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-949855"
	I1213 09:36:34.874950   51535 host.go:66] Checking if "custom-flannel-949855" exists ...
	I1213 09:36:34.874958   51535 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:34.875095   51535 cache.go:107] acquiring lock: {Name:mk42b4b4a968c0b780e9e698938a03f292a350d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:36:34.875213   51535 cache.go:115] /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1213 09:36:34.875247   51535 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 170.734µs
	I1213 09:36:34.875257   51535 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1213 09:36:34.875271   51535 cache.go:87] Successfully saved all images to host disk.
	I1213 09:36:34.875508   51535 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:36:34.879004   51535 out.go:179] * Verifying Kubernetes components...
	I1213 09:36:34.879972   51535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:36:34.880404   51535 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:36:32.016317   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:32.016402   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:32.016420   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:32.016435   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:32.016445   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:32.016452   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:32.016463   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:32.016475   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:32.016481   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:32.016493   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:36:32.016516   50577 retry.go:31] will retry after 490.807491ms: missing components: kube-dns
	I1213 09:36:32.513558   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:32.513593   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:32.513602   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:32.513609   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:32.513615   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:32.513620   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:32.513624   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:32.513629   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:32.513634   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:32.513641   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:36:32.513658   50577 retry.go:31] will retry after 797.137585ms: missing components: kube-dns
	I1213 09:36:33.321222   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:33.321267   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:33.321279   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:33.321290   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:33.321297   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:33.321305   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:33.321310   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:33.321316   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:33.321321   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:33.321326   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:33.321380   50577 retry.go:31] will retry after 779.818137ms: missing components: kube-dns
	I1213 09:36:34.114613   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:34.114650   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:34.114663   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:34.114671   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:34.114677   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:34.114683   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:34.114688   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:34.114694   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:34.114699   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:34.114705   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:34.114723   50577 retry.go:31] will retry after 1.055249542s: missing components: kube-dns
	I1213 09:36:35.180653   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:35.180703   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:35.180719   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:35.180729   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:35.180735   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:35.180743   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:35.180752   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:35.180766   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:35.180771   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:35.180776   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:35.180795   50577 retry.go:31] will retry after 1.139802374s: missing components: kube-dns
	I1213 09:36:36.331485   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:36.331533   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:36.331548   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:36.331560   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:36.331567   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:36.331574   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:36.331583   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:36.331596   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:36.331603   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:36.331608   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:36.331629   50577 retry.go:31] will retry after 1.874316657s: missing components: kube-dns
	I1213 09:36:34.880563   51535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:36:34.880921   51535 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-949855"
	I1213 09:36:34.880971   51535 host.go:66] Checking if "custom-flannel-949855" exists ...
	I1213 09:36:34.881855   51535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:36:34.881874   51535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:36:34.883942   51535 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:36:34.883967   51535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:36:34.884896   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.885677   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.885719   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.886070   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:34.887037   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.887971   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.888010   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.888450   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:34.889609   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.890165   51535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:e4:79", ip: ""} in network mk-custom-flannel-949855: {Iface:virbr5 ExpiryTime:2025-12-13 10:35:59 +0000 UTC Type:0 Mac:52:54:00:9e:e4:79 Iaid: IPaddr:192.168.83.251 Prefix:24 Hostname:custom-flannel-949855 Clientid:01:52:54:00:9e:e4:79}
	I1213 09:36:34.890195   51535 main.go:143] libmachine: domain custom-flannel-949855 has defined IP address 192.168.83.251 and MAC address 52:54:00:9e:e4:79 in network mk-custom-flannel-949855
	I1213 09:36:34.890401   51535 sshutil.go:53] new ssh client: &{IP:192.168.83.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/custom-flannel-949855/id_rsa Username:docker}
	I1213 09:36:35.591006   51535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:36:35.718834   51535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:36:35.916022   51535 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.035409951s)
	I1213 09:36:35.916108   51535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:36:35.916124   51535 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.036121711s)
	I1213 09:36:35.916161   51535 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 09:36:35.916037   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.041317057s)
	I1213 09:36:35.916171   51535 docker.go:697] gcr.io/k8s-minikube/gvisor-addon:2 wasn't preloaded
	I1213 09:36:35.916179   51535 cache_images.go:90] LoadCachedImages start: [gcr.io/k8s-minikube/gvisor-addon:2]
	I1213 09:36:35.916367   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 09:36:35.918742   51535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.123565   51535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.404682689s)
	I1213 09:36:37.123670   51535 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.207544979s)
	I1213 09:36:37.123929   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.20753791s)
	I1213 09:36:37.123954   51535 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1213 09:36:37.124039   51535 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2: (1.205256516s)
	I1213 09:36:37.124081   51535 cache_images.go:118] "gcr.io/k8s-minikube/gvisor-addon:2" needs transfer: "gcr.io/k8s-minikube/gvisor-addon:2" does not exist at hash "sha256:3b59a93df63497f2242eafd7e18fd26aff9ead0899361b8a5f7ac2e648ba898e" in container runtime
	I1213 09:36:37.124115   51535 docker.go:338] Removing image: gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.124159   51535 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/gvisor-addon:2
	I1213 09:36:37.124997   51535 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-949855" to be "Ready" ...
	I1213 09:36:37.125726   51535 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 09:36:32.942036   52766 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1213 09:36:32.942240   52766 start.go:159] libmachine.API.Create for "false-949855" (driver="kvm2")
	I1213 09:36:32.942269   52766 client.go:173] LocalClient.Create starting
	I1213 09:36:32.942387   52766 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-9390/.minikube/certs/ca.pem
	I1213 09:36:32.942441   52766 main.go:143] libmachine: Decoding PEM data...
	I1213 09:36:32.942470   52766 main.go:143] libmachine: Parsing certificate...
	I1213 09:36:32.942534   52766 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-9390/.minikube/certs/cert.pem
	I1213 09:36:32.942573   52766 main.go:143] libmachine: Decoding PEM data...
	I1213 09:36:32.942591   52766 main.go:143] libmachine: Parsing certificate...
	I1213 09:36:32.942975   52766 main.go:143] libmachine: creating domain...
	I1213 09:36:32.942987   52766 main.go:143] libmachine: creating network...
	I1213 09:36:32.945746   52766 main.go:143] libmachine: found existing default network
	I1213 09:36:32.945931   52766 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:32.946996   52766 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:0a:2a} reservation:<nil>}
	I1213 09:36:32.947782   52766 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:3d:2c} reservation:<nil>}
	I1213 09:36:32.948796   52766 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebc2e0}
	I1213 09:36:32.948903   52766 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-false-949855</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:32.955737   52766 main.go:143] libmachine: creating private network mk-false-949855 192.168.61.0/24...
	I1213 09:36:33.054561   52766 main.go:143] libmachine: private network mk-false-949855 192.168.61.0/24 created
	I1213 09:36:33.055066   52766 main.go:143] libmachine: <network>
	  <name>mk-false-949855</name>
	  <uuid>1d3be71d-800e-4eda-816b-b80fe9958d0b</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:75:81:cc'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:36:33.055111   52766 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 ...
	I1213 09:36:33.055141   52766 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22128-9390/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 09:36:33.055154   52766 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:33.055224   52766 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22128-9390/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22128-9390/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1213 09:36:33.363325   52766 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/id_rsa...
	I1213 09:36:33.455792   52766 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk...
	I1213 09:36:33.455864   52766 main.go:143] libmachine: Writing magic tar header
	I1213 09:36:33.455889   52766 main.go:143] libmachine: Writing SSH key tar header
	I1213 09:36:33.455970   52766 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 ...
	I1213 09:36:33.456031   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855
	I1213 09:36:33.456053   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855 (perms=drwx------)
	I1213 09:36:33.456068   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube/machines
	I1213 09:36:33.456078   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube/machines (perms=drwxr-xr-x)
	I1213 09:36:33.456088   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 09:36:33.456097   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390/.minikube (perms=drwxr-xr-x)
	I1213 09:36:33.456104   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-9390
	I1213 09:36:33.456113   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-9390 (perms=drwxrwxr-x)
	I1213 09:36:33.456122   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 09:36:33.456132   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 09:36:33.456140   52766 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 09:36:33.456147   52766 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 09:36:33.456155   52766 main.go:143] libmachine: checking permissions on dir: /home
	I1213 09:36:33.456162   52766 main.go:143] libmachine: skipping /home - not owner
	I1213 09:36:33.456165   52766 main.go:143] libmachine: defining domain...
	I1213 09:36:33.457580   52766 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>false-949855</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-false-949855'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:36:33.466815   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:84:b8:af in network default
	I1213 09:36:33.467785   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:33.467808   52766 main.go:143] libmachine: starting domain...
	I1213 09:36:33.467812   52766 main.go:143] libmachine: ensuring networks are active...
	I1213 09:36:33.468839   52766 main.go:143] libmachine: Ensuring network default is active
	I1213 09:36:33.469438   52766 main.go:143] libmachine: Ensuring network mk-false-949855 is active
	I1213 09:36:33.470524   52766 main.go:143] libmachine: getting domain XML...
	I1213 09:36:33.472111   52766 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>false-949855</name>
	  <uuid>ab7f1c5a-e30f-44dd-a282-41f48a9acd2a</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22128-9390/.minikube/machines/false-949855/false-949855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:18:fc'/>
	      <source network='mk-false-949855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:84:b8:af'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:36:35.139253   52766 main.go:143] libmachine: waiting for domain to start...
	I1213 09:36:35.140924   52766 main.go:143] libmachine: domain is now running
	I1213 09:36:35.140948   52766 main.go:143] libmachine: waiting for IP...
	I1213 09:36:35.142097   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.142903   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.142937   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.143406   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.143479   52766 retry.go:31] will retry after 255.519964ms: waiting for domain to come up
	I1213 09:36:35.401410   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.402406   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.402429   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.402949   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.402996   52766 retry.go:31] will retry after 388.363121ms: waiting for domain to come up
	I1213 09:36:35.792777   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:35.793790   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:35.793813   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:35.794605   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:35.794648   52766 retry.go:31] will retry after 406.715621ms: waiting for domain to come up
	I1213 09:36:36.203685   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:36.204646   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:36.204672   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:36.205190   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:36.205233   52766 retry.go:31] will retry after 462.279363ms: waiting for domain to come up
	I1213 09:36:36.669862   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:36.670863   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:36.670891   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:36.671532   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:36.671576   52766 retry.go:31] will retry after 717.301952ms: waiting for domain to come up
	I1213 09:36:37.390701   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:37.391903   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:37.391928   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:37.392632   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:37.392700   52766 retry.go:31] will retry after 844.713524ms: waiting for domain to come up
	I1213 09:36:37.127126   51535 addons.go:530] duration metric: took 2.252429436s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 09:36:37.161666   51535 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2
	I1213 09:36:37.161821   51535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2
	I1213 09:36:37.181625   51535 ssh_runner.go:352] existence check for /var/lib/minikube/images/gvisor-addon_2: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/gvisor-addon_2': No such file or directory
	I1213 09:36:37.181671   51535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 --> /var/lib/minikube/images/gvisor-addon_2 (12606976 bytes)
	I1213 09:36:37.633572   51535 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-949855" context rescaled to 1 replicas
	I1213 09:36:37.760162   51535 docker.go:305] Loading image: /var/lib/minikube/images/gvisor-addon_2
	I1213 09:36:37.760209   51535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load"
	I1213 09:36:38.836731   51535 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load": (1.076497445s)
	I1213 09:36:38.836759   51535 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-9390/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 from cache
	I1213 09:36:38.836798   51535 cache_images.go:125] Successfully loaded all cached images
	I1213 09:36:38.836806   51535 cache_images.go:94] duration metric: took 2.92061611s to LoadCachedImages
	I1213 09:36:38.836814   51535 cache_images.go:264] succeeded pushing to: custom-flannel-949855
	W1213 09:36:39.133273   51535 node_ready.go:57] node "custom-flannel-949855" has "Ready":"False" status (will retry)
	I1213 09:36:38.233302   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:38.233365   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:38.233381   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:38.233392   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:38.233398   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:38.233405   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:38.233418   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:38.233424   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:38.233431   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:38.233437   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:38.233456   50577 retry.go:31] will retry after 2.187269566s: missing components: kube-dns
	I1213 09:36:40.439567   50577 system_pods.go:86] 9 kube-system pods found
	I1213 09:36:40.439610   50577 system_pods.go:89] "calico-kube-controllers-5c676f698c-mn5lt" [d090b034-f709-45a5-ae36-0df9d318f18f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 09:36:40.439623   50577 system_pods.go:89] "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 09:36:40.439633   50577 system_pods.go:89] "coredns-66bc5c9577-g9rr9" [8710f185-92d4-4694-bcbf-258da3c3aee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:36:40.439639   50577 system_pods.go:89] "etcd-calico-949855" [cb0247f0-9e7a-4d22-a4e6-db35a7b199ee] Running
	I1213 09:36:40.439646   50577 system_pods.go:89] "kube-apiserver-calico-949855" [43f43361-238a-4edf-acf1-b7563438a8a5] Running
	I1213 09:36:40.439652   50577 system_pods.go:89] "kube-controller-manager-calico-949855" [4a0a3c98-de85-4e66-b0df-c7c124ce9885] Running
	I1213 09:36:40.439657   50577 system_pods.go:89] "kube-proxy-vdgk9" [39b7aee3-6998-48f4-af6b-b629c648731f] Running
	I1213 09:36:40.439662   50577 system_pods.go:89] "kube-scheduler-calico-949855" [5ab00101-de5c-4ff3-a924-7afdd523a0fb] Running
	I1213 09:36:40.439667   50577 system_pods.go:89] "storage-provisioner" [95f85e24-f720-4da8-b6c0-3a3fe90edf56] Running
	I1213 09:36:40.439687   50577 retry.go:31] will retry after 3.160505042s: missing components: kube-dns
	I1213 09:36:38.241979   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:38.242900   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:38.242924   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:38.243523   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:38.243594   52766 retry.go:31] will retry after 809.171625ms: waiting for domain to come up
	I1213 09:36:39.054087   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:39.055102   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:39.055131   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:39.055669   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:39.055715   52766 retry.go:31] will retry after 1.478366562s: waiting for domain to come up
	I1213 09:36:40.535505   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:40.536406   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:40.536432   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:40.536872   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:40.536914   52766 retry.go:31] will retry after 1.596267498s: waiting for domain to come up
	I1213 09:36:42.137419   52766 main.go:143] libmachine: domain false-949855 has defined MAC address 52:54:00:f2:18:fc in network mk-false-949855
	I1213 09:36:42.138679   52766 main.go:143] libmachine: no network interface addresses found for domain false-949855 (source=lease)
	I1213 09:36:42.138702   52766 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:36:42.139433   52766 main.go:143] libmachine: unable to find current IP address of domain false-949855 in network mk-false-949855 (interfaces detected: [])
	I1213 09:36:42.139504   52766 retry.go:31] will retry after 1.971891964s: waiting for domain to come up
	W1213 09:36:41.633456   51535 node_ready.go:57] node "custom-flannel-949855" has "Ready":"False" status (will retry)
	W1213 09:36:43.635140   51535 node_ready.go:57] node "custom-flannel-949855" has "Ready":"False" status (will retry)
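
The repeated "will retry after ...: waiting for domain to come up" lines above (and the similar node-ready and kube-dns polls) reflect what appears to be a jittered, growing backoff loop around the libvirt lease/ARP lookup. Purely as an illustration of that pattern, here is a minimal Go sketch; the lookupIP helper is a hypothetical stand-in for the lease/ARP query, and this is not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for the lease/ARP query logged above;
// it would return the domain's IP once DHCP has handed one out.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after ...: waiting for domain to come up" messages in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay = delay * 3 / 2 // grow the base delay, capped around 2s
		}
	}
	return "", fmt.Errorf("domain %s never reported an IP within %v", domain, deadline)
}

func main() {
	if ip, err := waitForIP("false-949855", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("domain IP:", ip)
	}
}
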
	
	
	==> Docker <==
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.376831829Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.500522910Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.500664801Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:35:44 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:44Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:35:44 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:44.739624053Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:35:52 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:52Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.231255405Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.231601592Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.237473779Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:35:55 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:55.237863871Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.336744894Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.413925743Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:35:57 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:35:57.414081115Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:35:57 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:35:57Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:36:06 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:06.768498035Z" level=info msg="ignoring event" container=aae5aab2e50bdc9408b701e65fed0ce7ed1adc3ead912c1e8d8038523e63d827 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 09:36:41 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-g6zqx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fb14dd0254c8a062a26d1c7681fafeb8ca5202bcd8b02460c1e2e2548aa86f10\""
	Dec 13 09:36:41 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.378414744Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.541550846Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.541669429Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 09:36:42 default-k8s-diff-port-018953 cri-dockerd[1562]: time="2025-12-13T09:36:42Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.577508938Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.577555368Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.582796110Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 09:36:42 default-k8s-diff-port-018953 dockerd[1181]: time="2025-12-13T09:36:42.582861384Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	a17e1cb8b1325       6e38f40d628db                                                                                         3 seconds ago        Running             storage-provisioner       2                   d95370cb4e448       storage-provisioner                                    kube-system
	9113189d0c04e       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        53 seconds ago       Running             kubernetes-dashboard      0                   c7ad4ecefcb75       kubernetes-dashboard-855c9754f9-p2brd                  kubernetes-dashboard
	5f700aa530657       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   ac125119b9779       busybox                                                default
	2b439f04f5fc3       52546a367cc9e                                                                                         About a minute ago   Running             coredns                   1                   5bbb8e3707181       coredns-66bc5c9577-8lnrn                               kube-system
	aae5aab2e50bd       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   d95370cb4e448       storage-provisioner                                    kube-system
	dca507ceebdbc       8aa150647e88a                                                                                         About a minute ago   Running             kube-proxy                1                   6d67df135cd3f       kube-proxy-bjk4k                                       kube-system
	914937023d7a6       88320b5498ff2                                                                                         About a minute ago   Running             kube-scheduler            1                   74c50b87c8c62       kube-scheduler-default-k8s-diff-port-018953            kube-system
	64509f24cd6b6       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   598b601dba63a       etcd-default-k8s-diff-port-018953                      kube-system
	a16e34261234d       a5f569d49a979                                                                                         About a minute ago   Running             kube-apiserver            1                   0a8e2bae44573       kube-apiserver-default-k8s-diff-port-018953            kube-system
	ac89bd32a6231       01e8bacf0f500                                                                                         About a minute ago   Running             kube-controller-manager   1                   81ece62c71ee5       kube-controller-manager-default-k8s-diff-port-018953   kube-system
	d3a0413e34a4d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   fd18283ce6f4d       busybox                                                default
	53a66219a044a       52546a367cc9e                                                                                         3 minutes ago        Exited              coredns                   0                   b93881983858e       coredns-66bc5c9577-8lnrn                               kube-system
	dd7e0f10d15aa       8aa150647e88a                                                                                         3 minutes ago        Exited              kube-proxy                0                   a670e4e9f57dc       kube-proxy-bjk4k                                       kube-system
	b6f1305f6ae8c       88320b5498ff2                                                                                         3 minutes ago        Exited              kube-scheduler            0                   8737390be5970       kube-scheduler-default-k8s-diff-port-018953            kube-system
	1042acfc9b5d0       a5f569d49a979                                                                                         3 minutes ago        Exited              kube-apiserver            0                   c56aa66442294       kube-apiserver-default-k8s-diff-port-018953            kube-system
	2f91314a94996       01e8bacf0f500                                                                                         3 minutes ago        Exited              kube-controller-manager   0                   e539015ef8b88       kube-controller-manager-default-k8s-diff-port-018953   kube-system
	aef3408be0d4d       a3e246e9556e9                                                                                         3 minutes ago        Exited              etcd                      0                   3c1955e0956eb       etcd-default-k8s-diff-port-018953                      kube-system
	
	
	==> coredns [2b439f04f5fc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36785 - 864 "HINFO IN 6554980673800972526.482035092620239871. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.186366491s
	
	
	==> coredns [53a66219a044] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43616 - 2659 "HINFO IN 6498301113996982301.4561126906837148237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12155845s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
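
The reflector errors in this coredns instance are ordinary client-go list calls (Namespaces, Services, EndpointSlices) timing out against the cluster IP 10.96.0.1:443. A roughly equivalent standalone list call, for readers who want to reproduce one by hand, might look like the sketch below; the kubeconfig path is a placeholder assumption, not something taken from this report.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, used only for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The coredns kubernetes plugin issues list calls like this one; with the
	// API server unreachable they fail with "dial tcp ...: i/o timeout".
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	nss, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list namespaces failed:", err)
		return
	}
	fmt.Println("namespaces:", len(nss.Items))
}
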
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=default-k8s-diff-port-018953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_33_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018953
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:36:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:33:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:36:41 +0000   Sat, 13 Dec 2025 09:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.59
	  Hostname:    default-k8s-diff-port-018953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4398f09173e84f11abbe3edaa9d2c77b
	  System UUID:                4398f091-73e8-4f11-abbe-3edaa9d2c77b
	  Boot ID:                    5588dae1-1009-43b7-ae1d-20ec2f0d5449
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 coredns-66bc5c9577-8lnrn                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m4s
	  kube-system                 etcd-default-k8s-diff-port-018953                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m10s
	  kube-system                 kube-apiserver-default-k8s-diff-port-018953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-proxy-bjk4k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-018953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 metrics-server-746fcd58dc-lbqfl                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m16s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-trhxx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p2brd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m1s               kube-proxy       
	  Normal   Starting                 68s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    3m10s              kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m10s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m10s              kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     3m10s              kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m10s              kubelet          Starting kubelet.
	  Normal   NodeReady                3m7s               kubelet          Node default-k8s-diff-port-018953 status is now: NodeReady
	  Normal   RegisteredNode           3m6s               node-controller  Node default-k8s-diff-port-018953 event: Registered Node default-k8s-diff-port-018953 in Controller
	  Normal   NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 72s                kubelet          Node default-k8s-diff-port-018953 has been rebooted, boot id: 5588dae1-1009-43b7-ae1d-20ec2f0d5449
	  Normal   RegisteredNode           67s                node-controller  Node default-k8s-diff-port-018953 event: Registered Node default-k8s-diff-port-018953 in Controller
	  Normal   Starting                 6s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s                 kubelet          Node default-k8s-diff-port-018953 status is now: NodeHasSufficientPID
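
The Ready condition in the table above is the same signal the node_ready.go lines earlier keep retrying on ("Ready":"False" until the kubelet flips it). As a minimal sketch of that check with client-go (again with a placeholder kubeconfig path; the node name is the one from this report), it could be done as follows.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, i.e. the
// condition shown in the "describe nodes" output above.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path, used only for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(clientset, "default-k8s-diff-port-018953")
	fmt.Println("Ready:", ready, "err:", err)
}
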
	
	
	==> dmesg <==
	[Dec13 09:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001469] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004195] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.876292] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.132505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.149494] kauditd_printk_skb: 421 callbacks suppressed
	[  +1.761194] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.356585] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.084317] kauditd_printk_skb: 204 callbacks suppressed
	[  +2.570266] kauditd_printk_skb: 183 callbacks suppressed
	[Dec13 09:36] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.295171] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [64509f24cd6b] <==
	{"level":"warn","ts":"2025-12-13T09:35:33.097246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.124604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.164603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.220986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.248442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.284098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.294789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.314196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:35:33.420999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46490","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:36:41.604563Z","caller":"traceutil/trace.go:172","msg":"trace[1472656459] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"136.84303ms","start":"2025-12-13T09:36:41.467700Z","end":"2025-12-13T09:36:41.604544Z","steps":["trace[1472656459] 'process raft request'  (duration: 136.733287ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:36:41.605300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.476854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.605385Z","caller":"traceutil/trace.go:172","msg":"trace[513915585] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:722; }","duration":"126.6191ms","start":"2025-12-13T09:36:41.478747Z","end":"2025-12-13T09:36:41.605366Z","steps":["trace[513915585] 'agreement among raft nodes before linearized reading'  (duration: 126.396483ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.731713Z","caller":"traceutil/trace.go:172","msg":"trace[2119032866] linearizableReadLoop","detail":"{readStateIndex:778; appliedIndex:778; }","duration":"102.94709ms","start":"2025-12-13T09:36:41.628522Z","end":"2025-12-13T09:36:41.731469Z","steps":["trace[2119032866] 'read index received'  (duration: 102.936527ms)","trace[2119032866] 'applied index is now lower than readState.Index'  (duration: 9.158µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:36:41.760605Z","caller":"traceutil/trace.go:172","msg":"trace[2019165356] transaction","detail":"{read_only:false; number_of_response:0; response_revision:722; }","duration":"135.332186ms","start":"2025-12-13T09:36:41.625252Z","end":"2025-12-13T09:36:41.760585Z","steps":["trace[2019165356] 'process raft request'  (duration: 107.054817ms)","trace[2019165356] 'compare'  (duration: 28.228942ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:36:41.761102Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.48897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.761145Z","caller":"traceutil/trace.go:172","msg":"trace[545500457] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:722; }","duration":"132.614996ms","start":"2025-12-13T09:36:41.628516Z","end":"2025-12-13T09:36:41.761131Z","steps":["trace[545500457] 'agreement among raft nodes before linearized reading'  (duration: 103.335875ms)","trace[545500457] 'range keys from in-memory index tree'  (duration: 29.130872ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:36:41.856396Z","caller":"traceutil/trace.go:172","msg":"trace[383765451] linearizableReadLoop","detail":"{readStateIndex:779; appliedIndex:779; }","duration":"124.550221ms","start":"2025-12-13T09:36:41.731828Z","end":"2025-12-13T09:36:41.856379Z","steps":["trace[383765451] 'read index received'  (duration: 124.543153ms)","trace[383765451] 'applied index is now lower than readState.Index'  (duration: 6.218µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:36:41.856523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.01702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:36:41.856540Z","caller":"traceutil/trace.go:172","msg":"trace[1107816314] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:722; }","duration":"188.05425ms","start":"2025-12-13T09:36:41.668482Z","end":"2025-12-13T09:36:41.856536Z","steps":["trace[1107816314] 'agreement among raft nodes before linearized reading'  (duration: 187.989836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:36:41.856999Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.037555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-018953\" limit:1 ","response":"range_response_count:1 size:4733"}
	{"level":"info","ts":"2025-12-13T09:36:41.857118Z","caller":"traceutil/trace.go:172","msg":"trace[160251017] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-018953; range_end:; response_count:1; response_revision:722; }","duration":"224.162611ms","start":"2025-12-13T09:36:41.632944Z","end":"2025-12-13T09:36:41.857106Z","steps":["trace[160251017] 'agreement among raft nodes before linearized reading'  (duration: 223.943602ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.857893Z","caller":"traceutil/trace.go:172","msg":"trace[1822799367] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"230.912908ms","start":"2025-12-13T09:36:41.626965Z","end":"2025-12-13T09:36:41.857878Z","steps":["trace[1822799367] 'process raft request'  (duration: 230.535159ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858607Z","caller":"traceutil/trace.go:172","msg":"trace[1952376893] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"157.009688ms","start":"2025-12-13T09:36:41.701584Z","end":"2025-12-13T09:36:41.858593Z","steps":["trace[1952376893] 'process raft request'  (duration: 156.07451ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858823Z","caller":"traceutil/trace.go:172","msg":"trace[1123472395] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"224.667593ms","start":"2025-12-13T09:36:41.634147Z","end":"2025-12-13T09:36:41.858814Z","steps":["trace[1123472395] 'process raft request'  (duration: 223.440666ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:36:41.858954Z","caller":"traceutil/trace.go:172","msg":"trace[655288941] transaction","detail":"{read_only:false; number_of_response:0; response_revision:723; }","duration":"189.020621ms","start":"2025-12-13T09:36:41.669927Z","end":"2025-12-13T09:36:41.858947Z","steps":["trace[655288941] 'process raft request'  (duration: 187.690908ms)"],"step_count":1}
	
	
	==> etcd [aef3408be0d4] <==
	{"level":"info","ts":"2025-12-13T09:33:38.161007Z","caller":"traceutil/trace.go:172","msg":"trace[423360871] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:276; }","duration":"201.246792ms","start":"2025-12-13T09:33:37.959749Z","end":"2025-12-13T09:33:38.160996Z","steps":["trace[423360871] 'agreement among raft nodes before linearized reading'  (duration: 149.547964ms)","trace[423360871] 'range keys from in-memory index tree'  (duration: 51.257162ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:33:38.346173Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.302638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-13T09:33:38.346260Z","caller":"traceutil/trace.go:172","msg":"trace[1776984165] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:277; }","duration":"145.398107ms","start":"2025-12-13T09:33:38.200845Z","end":"2025-12-13T09:33:38.346243Z","steps":["trace[1776984165] 'agreement among raft nodes before linearized reading'  (duration: 84.191909ms)","trace[1776984165] 'range keys from in-memory index tree'  (duration: 60.598183ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:33:38.346513Z","caller":"traceutil/trace.go:172","msg":"trace[1272861528] transaction","detail":"{read_only:false; response_revision:278; number_of_response:1; }","duration":"149.00589ms","start":"2025-12-13T09:33:38.197469Z","end":"2025-12-13T09:33:38.346475Z","steps":["trace[1272861528] 'process raft request'  (duration: 87.608816ms)","trace[1272861528] 'compare'  (duration: 60.635698ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:33:38.608864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.57294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11123217178011021227 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/job-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/job-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:33:38.608954Z","caller":"traceutil/trace.go:172","msg":"trace[191922514] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"249.397781ms","start":"2025-12-13T09:33:38.359546Z","end":"2025-12-13T09:33:38.608943Z","steps":["trace[191922514] 'process raft request'  (duration: 103.188496ms)","trace[191922514] 'compare'  (duration: 145.311701ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:33:38.636561Z","caller":"traceutil/trace.go:172","msg":"trace[1798635197] transaction","detail":"{read_only:false; response_revision:280; number_of_response:1; }","duration":"268.484714ms","start":"2025-12-13T09:33:38.367981Z","end":"2025-12-13T09:33:38.636465Z","steps":["trace[1798635197] 'process raft request'  (duration: 267.17058ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:34:31.546637Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:34:31.546760Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-018953","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	{"level":"error","ts":"2025-12-13T09:34:31.546883Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:34:31.550325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:34:38.553571Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.553627Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602de89049e69a5d","current-leader-member-id":"602de89049e69a5d"}
	{"level":"info","ts":"2025-12-13T09:34:38.553850Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:34:38.553862Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556612Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556706Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:34:38.556718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556757Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:34:38.556765Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.59:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:34:38.556770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.59:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.562282Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"error","ts":"2025-12-13T09:34:38.562408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.59:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:34:38.562438Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2025-12-13T09:34:38.562445Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-018953","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	
	
	==> kernel <==
	 09:36:46 up 1 min,  0 users,  load average: 1.22, 0.57, 0.21
	Linux default-k8s-diff-port-018953 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1042acfc9b5d] <==
	W1213 09:34:40.622822       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.624325       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.661871       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.672549       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.730084       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.736885       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.756033       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.780475       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.826049       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.837920       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.932683       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.932769       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:40.952101       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.143618       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.182743       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.196680       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.205755       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.229454       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.233434       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.256087       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.282714       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.303983       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.517596       1 logging.go:55] [core] [Channel #20 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.653186       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 09:34:41.672603       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a16e34261234] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:35:35.792541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:35:35.792921       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:35:35.793174       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:35:35.794093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 09:35:37.097294       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:35:37.176751       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:35:37.227745       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:35:37.272717       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:35:39.196593       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:35:39.355391       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:35:39.420125       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:35:40.658511       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:41.272672       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.93.220"}
	I1213 09:35:41.301717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.133"}
	W1213 09:36:40.177828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:36:40.182971       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 09:36:40.185715       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 09:36:40.186493       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 09:36:40.186530       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 09:36:40.186807       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2f91314a9499] <==
	I1213 09:33:40.950295       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 09:33:40.956871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:33:40.958115       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 09:33:40.958323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:33:40.966170       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:33:40.966183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:33:40.966391       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 09:33:40.967667       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 09:33:40.968920       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:33:40.968953       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:33:40.972144       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:33:40.972199       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:33:40.972742       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:33:40.977153       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:33:40.977253       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:33:40.979526       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:33:40.982992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:33:40.983081       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:33:40.998693       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:33:40.999070       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 09:33:40.999177       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:33:40.999329       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:33:40.999443       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:33:40.999453       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:33:41.012729       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-018953" podCIDRs=["10.244.0.0/24"]
	
	
	==> kube-controller-manager [ac89bd32a623] <==
	I1213 09:35:39.162655       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 09:35:39.162976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:35:39.171295       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:35:39.174249       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:35:39.174268       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:35:39.179131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:35:39.184147       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:35:39.186918       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:35:39.204596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:35:39.204652       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:35:39.207228       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:35:39.225600       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:35:39.405213       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:35:39.405236       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:35:39.405243       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:35:39.463258       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 09:35:40.828492       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.868395       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.920353       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.927772       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:40.954597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:41.010644       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:41.011188       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1213 09:36:40.263544       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1213 09:36:40.276548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [dca507ceebdb] <==
	I1213 09:35:37.040949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:35:37.141659       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:35:37.144469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.59"]
	E1213 09:35:37.146002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:35:37.358849       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:35:37.358979       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:35:37.360125       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:35:37.387908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:35:37.391056       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:35:37.391095       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:35:37.403617       1 config.go:200] "Starting service config controller"
	I1213 09:35:37.403649       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:35:37.404430       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:35:37.404450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:35:37.404570       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:35:37.404575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:35:37.418884       1 config.go:309] "Starting node config controller"
	I1213 09:35:37.425706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:35:37.425925       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:35:37.504639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:35:37.504639       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:35:37.504659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dd7e0f10d15a] <==
	I1213 09:33:43.815907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:33:43.916581       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:33:43.916634       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.59"]
	E1213 09:33:43.916761       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:33:44.084469       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:33:44.084804       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:33:44.085051       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:33:44.146236       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:33:44.148849       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:33:44.148893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:33:44.174699       1 config.go:200] "Starting service config controller"
	I1213 09:33:44.186845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:33:44.177011       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:33:44.199241       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:33:44.177199       1 config.go:309] "Starting node config controller"
	I1213 09:33:44.199731       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:33:44.199742       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:33:44.176999       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:33:44.199753       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:33:44.300748       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:33:44.300859       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:33:44.324653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [914937023d7a] <==
	I1213 09:35:33.551899       1 serving.go:386] Generated self-signed cert in-memory
	I1213 09:35:35.483371       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:35:35.483423       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:35:35.542688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 09:35:35.543148       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 09:35:35.543463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:35:35.544496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:35:35.544750       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.544814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.556538       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:35:35.557833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:35:35.647442       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 09:35:35.683573       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 09:35:35.683720       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b6f1305f6ae8] <==
	E1213 09:33:33.891539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:33:33.934952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:33:33.941462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:33:34.097604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:33:34.133283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:33:34.228471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:33:34.295819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:33:34.316845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:33:34.398572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:33:34.424661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:33:34.433706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:33:34.466980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:33:34.520494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:33:34.524204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:33:34.564953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:33:34.599575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:33:34.603627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:33:34.662432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 09:33:36.662642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:34:31.600635       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:34:31.600665       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:34:31.600678       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:34:31.600709       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:34:31.601379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:34:31.606769       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.693509    4247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56aa66442294d95cdc779db499f73a6a65c5d8343714f4138a7c6139d20be84"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.694396    4247 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.746098    4247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd18283ce6f4da3d099da743bad1acd3bbd473f1ee32642ec5b01ea1094721a2"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863087    4247 kubelet_node_status.go:124] "Node was previously registered" node="default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863449    4247 kubelet_node_status.go:78] "Successfully registered node" node="default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.863559    4247 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.865734    4247 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.869479    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.910589    4247 apiserver.go:52] "Watching apiserver"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.939474    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-018953\" already exists" pod="kube-system/etcd-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.939595    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:41.940658    4247 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-018953\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-018953"
	Dec 13 09:36:41 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:41.997570    4247 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063317    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f04d783-316c-46b9-af1b-892240189979-tmp\") pod \"storage-provisioner\" (UID: \"4f04d783-316c-46b9-af1b-892240189979\") " pod="kube-system/storage-provisioner"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063494    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/222e259f-13b1-4cc0-b420-fb9f4c871473-xtables-lock\") pod \"kube-proxy-bjk4k\" (UID: \"222e259f-13b1-4cc0-b420-fb9f4c871473\") " pod="kube-system/kube-proxy-bjk4k"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.063533    4247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/222e259f-13b1-4cc0-b420-fb9f4c871473-lib-modules\") pod \"kube-proxy-bjk4k\" (UID: \"222e259f-13b1-4cc0-b420-fb9f4c871473\") " pod="kube-system/kube-proxy-bjk4k"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: I1213 09:36:42.224982    4247 scope.go:117] "RemoveContainer" containerID="aae5aab2e50bdc9408b701e65fed0ce7ed1adc3ead912c1e8d8038523e63d827"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547075    4247 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547146    4247 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547406    4247 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-trhxx_kubernetes-dashboard(def61142-6626-499a-b752-60ee3640ae87): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.547486    4247 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trhxx" podUID="def61142-6626-499a-b752-60ee3640ae87"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583705    4247 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583751    4247 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583835    4247 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-lbqfl_kube-system(c4360498-5419-48e2-994c-87efe5f4c20f): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 09:36:42 default-k8s-diff-port-018953 kubelet[4247]: E1213 09:36:42.583864    4247 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lbqfl" podUID="c4360498-5419-48e2-994c-87efe5f4c20f"
	
	
	==> kubernetes-dashboard [9113189d0c04] <==
	2025/12/13 09:35:53 Starting overwatch
	2025/12/13 09:35:53 Using namespace: kubernetes-dashboard
	2025/12/13 09:35:53 Using in-cluster config to connect to apiserver
	2025/12/13 09:35:53 Using secret token for csrf signing
	2025/12/13 09:35:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:35:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:35:53 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:35:53 Generating JWE encryption key
	2025/12/13 09:35:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:35:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:35:53 Initializing JWE encryption key from synchronized object
	2025/12/13 09:35:53 Creating in-cluster Sidecar client
	2025/12/13 09:35:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:35:53 Serving insecurely on HTTP port: 9090
	2025/12/13 09:36:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a17e1cb8b132] <==
	I1213 09:36:42.715501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:36:42.736952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:36:42.737511       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:36:42.741470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:36:46.206605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [aae5aab2e50b] <==
	I1213 09:35:36.713132       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:36:06.734141       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx: exit status 1 (103.827246ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-lbqfl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-trhxx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-018953 describe pod metrics-server-746fcd58dc-lbqfl dashboard-metrics-scraper-6ffb444bf9-trhxx: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (41.21s)
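The failing subtest here, TestStartStop/group/default-k8s-diff-port/serial/Pause, pauses the profile and then reads the APIServer and Kubelet fields from `minikube status` (the same status invocation the harness runs in the post-mortem at helpers_test.go:263 above). The block below is a minimal standalone sketch of that pause-then-poll sequence for local reproduction; it is not part of the test suite, the binary path and profile name are copied from this report, and the exact flags and expected field values are assumptions based on the checks shown in this log.

	// pausecheck.go - sketch that mirrors the pause/status sequence exercised by the
	// Pause subtest, using only the Go standard library.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	const (
		minikube = "out/minikube-linux-amd64"     // assumed binary path, as used in this report
		profile  = "default-k8s-diff-port-018953" // example profile name taken from the log above
	)

	// run invokes the minikube binary and returns its combined, trimmed output.
	func run(args ...string) (string, error) {
		out, err := exec.Command(minikube, args...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		if out, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
			log.Fatalf("pause failed: %v\n%s", err, out)
		}
		// Values the Pause subtest looks for after a successful pause (assumption:
		// API server reported as Paused, kubelet as Stopped). Note that `status`
		// exits non-zero when a component is not Running, so the error is printed
		// rather than treated as fatal.
		for field, want := range map[string]string{"APIServer": "Paused", "Kubelet": "Stopped"} {
			got, err := run("status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
			fmt.Printf("%s: got %q, want %q (status err: %v)\n", field, got, want, err)
		}
	}

If the paused profile behaves as it did in this run, the two status reads above are the point at which the reported field diverges from the value the subtest expects.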

                                                
                                    

Test pass (405/452)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.76
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 2.78
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.86
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.66
31 TestOffline 110.79
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 211.11
38 TestAddons/serial/Volcano 45.76
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 9.62
44 TestAddons/parallel/Registry 17.6
45 TestAddons/parallel/RegistryCreds 0.58
46 TestAddons/parallel/Ingress 20.71
47 TestAddons/parallel/InspektorGadget 11.01
48 TestAddons/parallel/MetricsServer 6.76
50 TestAddons/parallel/CSI 46.62
51 TestAddons/parallel/Headlamp 24.17
52 TestAddons/parallel/CloudSpanner 6.51
53 TestAddons/parallel/LocalPath 59.01
54 TestAddons/parallel/NvidiaDevicePlugin 6.66
55 TestAddons/parallel/Yakd 11.89
57 TestAddons/StoppedEnableDisable 13.2
58 TestCertOptions 64.67
59 TestCertExpiration 311.93
60 TestDockerFlags 70.72
61 TestForceSystemdFlag 72.95
62 TestForceSystemdEnv 50.5
67 TestErrorSpam/setup 41.55
68 TestErrorSpam/start 0.36
69 TestErrorSpam/status 0.71
70 TestErrorSpam/pause 1.36
71 TestErrorSpam/unpause 1.67
72 TestErrorSpam/stop 6.6
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 81.2
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 55.58
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
84 TestFunctional/serial/CacheCmd/cache/add_local 1.33
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.09
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 56.92
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.07
95 TestFunctional/serial/LogsFileCmd 1.12
96 TestFunctional/serial/InvalidService 4.12
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 33.02
100 TestFunctional/parallel/DryRun 0.27
101 TestFunctional/parallel/InternationalLanguage 0.14
102 TestFunctional/parallel/StatusCmd 0.9
106 TestFunctional/parallel/ServiceCmdConnect 8.66
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 48.75
110 TestFunctional/parallel/SSHCmd 0.31
111 TestFunctional/parallel/CpCmd 1.34
112 TestFunctional/parallel/MySQL 47.59
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.25
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
122 TestFunctional/parallel/License 0.27
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
124 TestFunctional/parallel/DockerEnv/bash 0.94
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.46
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
140 TestFunctional/parallel/ProfileCmd/profile_list 0.37
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
142 TestFunctional/parallel/MountCmd/any-port 28.96
143 TestFunctional/parallel/ServiceCmd/List 0.28
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
146 TestFunctional/parallel/ServiceCmd/Format 0.42
147 TestFunctional/parallel/ServiceCmd/URL 0.35
148 TestFunctional/parallel/MountCmd/specific-port 1.52
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.24
150 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
151 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
152 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
153 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
154 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
155 TestFunctional/parallel/ImageCommands/Setup 1.62
156 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
157 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
158 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
159 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.71
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
162 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 86.63
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 57.95
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.22
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.29
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.08
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 55.51
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.09
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.11
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.65
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 47.24
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.27
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.15
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.88
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 9.58
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 31.22
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.37
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.26
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 45.47
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.23
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.19
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.24
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.38
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.22
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.34
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.33
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.33
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.22
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.42
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.32
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.29
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.31
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.36
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.53
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.53
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.15
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.62
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.25
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.2
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.32
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.7
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.7
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.83
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.67
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.66
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.77
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.5
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 0.79
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
260 TestGvisorAddon 183.43
263 TestMultiControlPlane/serial/StartCluster 267.75
264 TestMultiControlPlane/serial/DeployApp 7.31
265 TestMultiControlPlane/serial/PingHostFromPods 1.55
266 TestMultiControlPlane/serial/AddWorkerNode 52.08
267 TestMultiControlPlane/serial/NodeLabels 0.07
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
269 TestMultiControlPlane/serial/CopyFile 11.54
270 TestMultiControlPlane/serial/StopSecondaryNode 15.47
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
272 TestMultiControlPlane/serial/RestartSecondaryNode 31.88
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 166.67
275 TestMultiControlPlane/serial/DeleteSecondaryNode 7.56
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
277 TestMultiControlPlane/serial/StopCluster 40.23
278 TestMultiControlPlane/serial/RestartCluster 137.73
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
280 TestMultiControlPlane/serial/AddSecondaryNode 93.41
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
284 TestImageBuild/serial/Setup 45.75
285 TestImageBuild/serial/NormalBuild 1.69
286 TestImageBuild/serial/BuildWithBuildArg 1.02
287 TestImageBuild/serial/BuildWithDockerIgnore 0.73
288 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.18
293 TestJSONOutput/start/Command 91.7
294 TestJSONOutput/start/Audit 0
296 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/pause/Command 0.65
300 TestJSONOutput/pause/Audit 0
302 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/unpause/Command 0.63
306 TestJSONOutput/unpause/Audit 0
308 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
311 TestJSONOutput/stop/Command 14.47
312 TestJSONOutput/stop/Audit 0
314 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
315 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
316 TestErrorJSONOutput 0.25
321 TestMainNoArgs 0.06
322 TestMinikubeProfile 96.67
325 TestMountStart/serial/StartWithMountFirst 24.57
326 TestMountStart/serial/VerifyMountFirst 0.33
327 TestMountStart/serial/StartWithMountSecond 25.18
328 TestMountStart/serial/VerifyMountSecond 0.32
329 TestMountStart/serial/DeleteFirst 0.71
330 TestMountStart/serial/VerifyMountPostDelete 0.33
331 TestMountStart/serial/Stop 1.36
332 TestMountStart/serial/RestartStopped 23.09
333 TestMountStart/serial/VerifyMountPostStop 0.34
336 TestMultiNode/serial/FreshStart2Nodes 121.65
337 TestMultiNode/serial/DeployApp2Nodes 5.91
338 TestMultiNode/serial/PingHostFrom2Pods 0.98
339 TestMultiNode/serial/AddNode 49.41
340 TestMultiNode/serial/MultiNodeLabels 0.07
341 TestMultiNode/serial/ProfileList 0.48
342 TestMultiNode/serial/CopyFile 6.25
343 TestMultiNode/serial/StopNode 2.47
344 TestMultiNode/serial/StartAfterStop 41.98
345 TestMultiNode/serial/RestartKeepsNodes 198.85
346 TestMultiNode/serial/DeleteNode 2.38
347 TestMultiNode/serial/StopMultiNode 27.18
348 TestMultiNode/serial/RestartMultiNode 130.65
349 TestMultiNode/serial/ValidateNameConflict 46.45
354 TestPreload 158.9
356 TestScheduledStopUnix 116.63
357 TestSkaffold 137.53
360 TestRunningBinaryUpgrade 399.75
362 TestKubernetesUpgrade 268.28
372 TestPause/serial/Start 89.01
383 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
386 TestNoKubernetes/serial/StartWithK8s 92.39
387 TestPause/serial/SecondStartNoReconfiguration 63.01
388 TestNoKubernetes/serial/StartWithStopK8s 16.15
389 TestNoKubernetes/serial/Start 24.75
390 TestStoppedBinaryUpgrade/Setup 0.68
391 TestStoppedBinaryUpgrade/Upgrade 164.69
392 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
393 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
394 TestNoKubernetes/serial/ProfileList 8.29
395 TestNoKubernetes/serial/Stop 1.53
396 TestNoKubernetes/serial/StartNoArgs 53.49
397 TestPause/serial/Pause 0.75
398 TestPause/serial/VerifyStatus 0.26
399 TestPause/serial/Unpause 0.76
400 TestPause/serial/PauseAgain 1.17
401 TestPause/serial/DeletePaused 0.91
402 TestPause/serial/VerifyDeletedResources 0.32
403 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
404 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
405 TestISOImage/Setup 61.65
407 TestISOImage/Binaries/crictl 0.17
408 TestISOImage/Binaries/curl 0.18
409 TestISOImage/Binaries/docker 0.18
410 TestISOImage/Binaries/git 0.18
411 TestISOImage/Binaries/iptables 0.17
412 TestISOImage/Binaries/podman 0.18
413 TestISOImage/Binaries/rsync 0.17
414 TestISOImage/Binaries/socat 0.17
415 TestISOImage/Binaries/wget 0.17
416 TestISOImage/Binaries/VBoxControl 0.17
417 TestISOImage/Binaries/VBoxService 0.17
419 TestStartStop/group/old-k8s-version/serial/FirstStart 105.1
421 TestStartStop/group/no-preload/serial/FirstStart 101.03
423 TestStartStop/group/embed-certs/serial/FirstStart 92.87
424 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
425 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
426 TestStartStop/group/old-k8s-version/serial/Stop 13.93
427 TestStartStop/group/no-preload/serial/DeployApp 9.48
428 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
429 TestStartStop/group/old-k8s-version/serial/SecondStart 50.48
430 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
431 TestStartStop/group/no-preload/serial/Stop 14.54
432 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
433 TestStartStop/group/no-preload/serial/SecondStart 55.99
434 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
436 TestStartStop/group/newest-cni/serial/FirstStart 57.91
437 TestStartStop/group/embed-certs/serial/DeployApp 10.45
438 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
439 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.45
440 TestStartStop/group/old-k8s-version/serial/Pause 3.49
441 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
442 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.44
443 TestStartStop/group/embed-certs/serial/Stop 13.98
445 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.23
446 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
447 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
448 TestStartStop/group/embed-certs/serial/SecondStart 56.42
449 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
450 TestStartStop/group/no-preload/serial/Pause 3.24
451 TestNetworkPlugins/group/auto/Start 114.33
452 TestStartStop/group/newest-cni/serial/DeployApp 0
453 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
454 TestStartStop/group/newest-cni/serial/Stop 13.71
455 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
456 TestStartStop/group/newest-cni/serial/SecondStart 57.98
457 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
458 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
459 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
461 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.44
462 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
463 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.1
464 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
465 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
466 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
467 TestStartStop/group/newest-cni/serial/Pause 2.74
468 TestNetworkPlugins/group/kindnet/Start 79.7
469 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
470 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.63
471 TestNetworkPlugins/group/calico/Start 123.68
472 TestNetworkPlugins/group/auto/KubeletFlags 0.22
473 TestNetworkPlugins/group/auto/NetCatPod 13.33
474 TestNetworkPlugins/group/auto/DNS 0.26
475 TestNetworkPlugins/group/auto/Localhost 0.2
476 TestNetworkPlugins/group/auto/HairPin 0.18
477 TestNetworkPlugins/group/custom-flannel/Start 76.52
478 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
479 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
480 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
481 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
482 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
483 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
485 TestNetworkPlugins/group/kindnet/DNS 0.23
486 TestNetworkPlugins/group/kindnet/Localhost 0.16
487 TestNetworkPlugins/group/kindnet/HairPin 0.16
488 TestNetworkPlugins/group/false/Start 94.24
489 TestNetworkPlugins/group/enable-default-cni/Start 98.81
490 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
491 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
492 TestNetworkPlugins/group/calico/ControllerPod 6.01
493 TestNetworkPlugins/group/custom-flannel/DNS 0.21
494 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
495 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
496 TestNetworkPlugins/group/calico/KubeletFlags 0.22
497 TestNetworkPlugins/group/calico/NetCatPod 13.37
498 TestNetworkPlugins/group/flannel/Start 73.97
499 TestNetworkPlugins/group/calico/DNS 0.4
500 TestNetworkPlugins/group/calico/Localhost 0.19
501 TestNetworkPlugins/group/calico/HairPin 0.23
502 TestNetworkPlugins/group/bridge/Start 102.93
503 TestNetworkPlugins/group/false/KubeletFlags 0.2
504 TestNetworkPlugins/group/false/NetCatPod 13.31
505 TestNetworkPlugins/group/false/DNS 0.21
506 TestNetworkPlugins/group/false/Localhost 0.15
507 TestNetworkPlugins/group/false/HairPin 0.15
508 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
509 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
510 TestNetworkPlugins/group/flannel/ControllerPod 6.01
511 TestNetworkPlugins/group/kubenet/Start 89.17
512 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
513 TestNetworkPlugins/group/flannel/NetCatPod 12.31
514 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
515 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
516 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
517 TestNetworkPlugins/group/flannel/DNS 0.22
518 TestNetworkPlugins/group/flannel/Localhost 0.17
519 TestNetworkPlugins/group/flannel/HairPin 0.17
521 TestISOImage/PersistentMounts//data 0.2
522 TestISOImage/PersistentMounts//var/lib/docker 0.19
523 TestISOImage/PersistentMounts//var/lib/cni 0.2
524 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
525 TestISOImage/PersistentMounts//var/lib/minikube 0.19
526 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
527 TestISOImage/PersistentMounts//var/lib/boot2docker 0.21
528 TestISOImage/VersionJSON 0.19
529 TestISOImage/eBPFSupport 0.19
530 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
531 TestNetworkPlugins/group/bridge/NetCatPod 11.26
532 TestNetworkPlugins/group/bridge/DNS 0.23
533 TestNetworkPlugins/group/bridge/Localhost 0.16
534 TestNetworkPlugins/group/bridge/HairPin 0.15
535 TestNetworkPlugins/group/kubenet/KubeletFlags 0.19
536 TestNetworkPlugins/group/kubenet/NetCatPod 11.24
537 TestNetworkPlugins/group/kubenet/DNS 0.17
538 TestNetworkPlugins/group/kubenet/Localhost 0.15
539 TestNetworkPlugins/group/kubenet/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (7.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-432916 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-432916 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (7.7549022s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.76s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 08:28:57.745769   13307 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1213 08:28:57.745857   13307 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-432916
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-432916: exit status 85 (80.362957ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-432916 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-432916 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:50
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:50.045598   13319 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:50.045829   13319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:50.045837   13319 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:50.045841   13319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:50.046047   13319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	W1213 08:28:50.046165   13319 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22128-9390/.minikube/config/config.json: open /home/jenkins/minikube-integration/22128-9390/.minikube/config/config.json: no such file or directory
	I1213 08:28:50.046719   13319 out.go:368] Setting JSON to true
	I1213 08:28:50.047651   13319 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":680,"bootTime":1765613850,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:50.047711   13319 start.go:143] virtualization: kvm guest
	I1213 08:28:50.052804   13319 out.go:99] [download-only-432916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1213 08:28:50.053042   13319 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 08:28:50.053101   13319 notify.go:221] Checking for updates...
	I1213 08:28:50.054652   13319 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:50.056386   13319 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:50.057830   13319 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:28:50.059252   13319 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:28:50.060722   13319 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:28:50.063582   13319 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:28:50.063868   13319 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:50.618821   13319 out.go:99] Using the kvm2 driver based on user configuration
	I1213 08:28:50.618858   13319 start.go:309] selected driver: kvm2
	I1213 08:28:50.618865   13319 start.go:927] validating driver "kvm2" against <nil>
	I1213 08:28:50.619227   13319 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:50.619786   13319 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 08:28:50.619989   13319 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:28:50.620017   13319 cni.go:84] Creating CNI manager for ""
	I1213 08:28:50.620081   13319 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:28:50.620094   13319 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:28:50.620145   13319 start.go:353] cluster config:
	{Name:download-only-432916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-432916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:28:50.620387   13319 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:28:50.622232   13319 out.go:99] Downloading VM boot image ...
	I1213 08:28:50.622282   13319 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22128-9390/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 08:28:53.920825   13319 out.go:99] Starting "download-only-432916" primary control-plane node in "download-only-432916" cluster
	I1213 08:28:53.920873   13319 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 08:28:53.937517   13319 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1213 08:28:53.937550   13319 cache.go:65] Caching tarball of preloaded images
	I1213 08:28:53.937726   13319 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 08:28:53.939723   13319 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 08:28:53.939741   13319 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1213 08:28:53.963699   13319 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1213 08:28:53.963857   13319 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-432916 host does not exist
	  To start a cluster, run: "minikube start -p download-only-432916"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-432916
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
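
Taken together, the v1.28.0 download-only subtests above cover the whole cache-only flow: a --download-only start, a "minikube logs" call that is expected to fail because no cluster host exists, and cleanup. A rough local reproduction sketch, assuming a minikube binary on PATH and a working kvm2 driver; the profile name download-only-demo is made up for illustration:

    # Cache the v1.28.0 ISO and preload without creating a cluster (mirrors the json-events subtest).
    minikube start -o=json --download-only -p download-only-demo --force \
      --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2
    # As in the LogsDuration subtest, "minikube logs" on a download-only profile exits non-zero
    # (status 85 in this run), since the control-plane host was never created.
    minikube logs -p download-only-demo || echo "logs exited with status $?"
    # Cleanup, matching the DeleteAll and DeleteAlwaysSucceeds subtests.
    minikube delete --all
    minikube delete -p download-only-demo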

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (2.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-777569 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-777569 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 : (2.78427405s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.78s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 08:29:00.926428   13307 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1213 08:29:00.926471   13307 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-777569
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-777569: exit status 85 (75.04966ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-432916 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-432916 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-432916                                                                                                                         │ download-only-432916 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-777569 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 │ download-only-777569 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:58
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:58.202692   13532 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:58.202788   13532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.202792   13532 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:58.202797   13532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.202976   13532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:28:58.203434   13532 out.go:368] Setting JSON to true
	I1213 08:28:58.204259   13532 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":688,"bootTime":1765613850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:58.204360   13532 start.go:143] virtualization: kvm guest
	I1213 08:28:58.206527   13532 out.go:99] [download-only-777569] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:58.206761   13532 notify.go:221] Checking for updates...
	I1213 08:28:58.208184   13532 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:58.210303   13532 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:58.211746   13532 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:28:58.213314   13532 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:28:58.214759   13532 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-777569 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777569"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-777569
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (2.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-669295 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-669295 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 : (2.859564766s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.86s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 08:29:04.188645   13307 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1213 08:29:04.188700   13307 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-669295
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-669295: exit status 85 (75.82331ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-432916 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2        │ download-only-432916 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-432916                                                                                                                                │ download-only-432916 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-777569 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2        │ download-only-777569 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-777569                                                                                                                                │ download-only-777569 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-669295 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 │ download-only-669295 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:01
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:01.386821   13693 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:01.387083   13693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:01.387094   13693 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:01.387099   13693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:01.387293   13693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:29:01.387872   13693 out.go:368] Setting JSON to true
	I1213 08:29:01.388792   13693 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":691,"bootTime":1765613850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:29:01.388862   13693 start.go:143] virtualization: kvm guest
	I1213 08:29:01.390849   13693 out.go:99] [download-only-669295] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:29:01.391040   13693 notify.go:221] Checking for updates...
	I1213 08:29:01.392885   13693 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:01.394518   13693 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:01.398111   13693 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:29:01.399650   13693 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:29:01.401010   13693 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-669295 host does not exist
	  To start a cluster, run: "minikube start -p download-only-669295"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-669295
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 08:29:05.050463   13307 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-796472 --alsologtostderr --binary-mirror http://127.0.0.1:42831 --driver=kvm2 
helpers_test.go:176: Cleaning up "binary-mirror-796472" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-796472
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (110.79s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-725821 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-725821 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m48.843791717s)
helpers_test.go:176: Cleaning up "offline-docker-725821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-725821
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-725821: (1.941304587s)
--- PASS: TestOffline (110.79s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-527167
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-527167: exit status 85 (66.449809ms)

                                                
                                                
-- stdout --
	* Profile "addons-527167" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527167"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-527167
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-527167: exit status 85 (66.194389ms)

                                                
                                                
-- stdout --
	* Profile "addons-527167" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527167"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (211.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-527167 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-527167 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m31.105873171s)
--- PASS: TestAddons/Setup (211.11s)

                                                
                                    
TestAddons/serial/Volcano (45.76s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 27.619143ms
addons_test.go:886: volcano-controller stabilized in 27.667551ms
addons_test.go:870: volcano-scheduler stabilized in 30.698738ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-tmpvh" [773f3bd9-5446-4366-9db9-07a6c5a1174f] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.006220734s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-v7sqc" [1e31d682-a681-48ce-9e74-f8e219621f44] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00814029s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-mlws8" [44e9de1f-de2a-42c4-aab1-544353bba338] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005318671s
addons_test.go:905: (dbg) Run:  kubectl --context addons-527167 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-527167 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-527167 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [03dd22d4-5081-4417-aa9c-df23627e2340] Pending
helpers_test.go:353: "test-job-nginx-0" [03dd22d4-5081-4417-aa9c-df23627e2340] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [03dd22d4-5081-4417-aa9c-df23627e2340] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.004294529s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable volcano --alsologtostderr -v=1: (12.211532395s)
--- PASS: TestAddons/serial/Volcano (45.76s)
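
The subtest's readiness checks can be approximated by hand with kubectl; a sketch only, assuming a profile/context named addons-demo (made up here) with the volcano addon enabled and a vcjob created from the same testdata, whose pods carry the volcano.sh/job-name=test-job label:

    # Wait for the volcano job's pods to become Ready (the test above allowed up to 3m).
    kubectl --context addons-demo -n my-volcano wait pod \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s
    # Disable the addon again, as the test does at the end.
    minikube -p addons-demo addons disable volcano --alsologtostderr -v=1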

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-527167 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-527167 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.62s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-527167 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-527167 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a5a96e52-c8d0-4aa7-81ac-9e1e2f629605] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a5a96e52-c8d0-4aa7-81ac-9e1e2f629605] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005195792s
addons_test.go:696: (dbg) Run:  kubectl --context addons-527167 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-527167 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-527167 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.62s)
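
The subtest's core assertion is that the gcp-auth addon injects credential-related environment variables into newly created pods. A hand-run equivalent, sketched against a hypothetical addons-demo profile and the busybox pod from testdata/busybox.yaml:

    # Both variables should be present in pods created after gcp-auth is enabled.
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"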

                                                
                                    
TestAddons/parallel/Registry (17.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 11.715377ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-4sjc2" [945bc911-c51b-484c-84ff-dc9f4e1794a2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009667956s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2gk4j" [550575df-f118-410c-ba54-0b4a9876d8b0] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00568073s
addons_test.go:394: (dbg) Run:  kubectl --context addons-527167 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-527167 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-527167 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.830450075s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 ip
2025/12/13 08:33:58 [DEBUG] GET http://192.168.39.154:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.60s)
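
For a manual spot check of the registry addon outside the test harness, the same two probes can be run directly; a sketch, with addons-demo standing in for the real profile name:

    # In-cluster: the registry service should answer on its cluster DNS name.
    kubectl --context addons-demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host: the test issued a GET against port 5000 on the minikube IP.
    curl -sI "http://$(minikube -p addons-demo ip):5000"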

                                                
                                    
TestAddons/parallel/RegistryCreds (0.58s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 9.664759ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-527167
addons_test.go:334: (dbg) Run:  kubectl --context addons-527167 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.58s)

                                                
                                    
TestAddons/parallel/Ingress (20.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-527167 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-527167 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-527167 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [dba1842c-3411-4a49-b68d-989817c762e2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [dba1842c-3411-4a49-b68d-989817c762e2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004070403s
I1213 08:34:16.620047   13307 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-527167 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.154
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable ingress-dns --alsologtostderr -v=1: (1.181553684s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable ingress --alsologtostderr -v=1: (8.098004865s)
--- PASS: TestAddons/parallel/Ingress (20.71s)
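
The two verification steps recorded above translate directly into manual commands; a sketch, again using a made-up addons-demo profile and the hostnames from the test's manifests:

    # ingress: request the nginx backend through the controller, matching on the Host header.
    minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: query the minikube IP as a DNS server for the ingress host used in testdata.
    nslookup hello-john.test "$(minikube -p addons-demo ip)"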

                                                
                                    
TestAddons/parallel/InspektorGadget (11.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-8fv7m" [60df6b39-125e-4c8c-b4e3-dd5f9a6870b5] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013608905s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable inspektor-gadget --alsologtostderr -v=1: (5.993149132s)
--- PASS: TestAddons/parallel/InspektorGadget (11.01s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.051489ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-hxwrc" [4bc10022-242a-41c3-9250-a1323ac6d7a8] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006759812s
addons_test.go:465: (dbg) Run:  kubectl --context addons-527167 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)
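The pass criterion here is simply that the resource-metrics pipeline answers a `kubectl top` query once the metrics-server pod is healthy. A minimal re-run sketch, with the `kubectl wait` line standing in as an approximation of the test's own pod-polling helper:

    # wait until the metrics-server pod reports Ready
    kubectl --context addons-527167 -n kube-system wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=6m
    # the addon works if this prints per-pod CPU/memory instead of an error from the metrics API
    kubectl --context addons-527167 top pods -n kube-system
    # clean up
    out/minikube-linux-amd64 -p addons-527167 addons disable metrics-server --alsologtostderr -v=1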

                                                
                                    
x
+
TestAddons/parallel/CSI (46.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 08:33:59.580561   13307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:33:59.590604   13307 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:33:59.590631   13307 kapi.go:107] duration metric: took 10.081244ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 10.0898ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [99b19df6-91b7-4678-8413-a61e90c71adb] Pending
helpers_test.go:353: "task-pv-pod" [99b19df6-91b7-4678-8413-a61e90c71adb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [99b19df6-91b7-4678-8413-a61e90c71adb] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.01808806s
addons_test.go:574: (dbg) Run:  kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-527167 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-527167 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-527167 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-527167 delete pod task-pv-pod: (1.829406424s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-527167 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [6b04879e-a83e-4c77-afd8-c5e12a2d427c] Pending
helpers_test.go:353: "task-pv-pod-restore" [6b04879e-a83e-4c77-afd8-c5e12a2d427c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [6b04879e-a83e-4c77-afd8-c5e12a2d427c] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005812441s
addons_test.go:616: (dbg) Run:  kubectl --context addons-527167 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-527167 delete pod task-pv-pod-restore: (1.369476437s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-527167 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-527167 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.178030187s)
--- PASS: TestAddons/parallel/CSI (46.62s)
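The repeated helpers_test.go:403/428 lines above are just polling loops; the workflow being exercised is provision, snapshot, restore. Condensed, with each poll shown once (manifests and object names are the ones from this run's testdata):

    # provision a PVC through the csi-hostpath driver and poll until .status.phase reports Bound
    kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-527167 get pvc hpvc -o jsonpath={.status.phase} -n default
    # attach it to a pod, then snapshot the volume; the snapshot is usable once readyToUse turns true
    kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-527167 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    # delete the original pod and claim, then restore a new claim plus pod from the snapshot
    kubectl --context addons-527167 delete pod task-pv-pod
    kubectl --context addons-527167 delete pvc hpvc
    kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-527167 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml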

                                                
                                    
x
+
TestAddons/parallel/Headlamp (24.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-527167 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-527167 --alsologtostderr -v=1: (1.187457986s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-g4r8h" [847694fb-caf2-4a33-8242-f98a3478a4ab] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-g4r8h" [847694fb-caf2-4a33-8242-f98a3478a4ab] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-g4r8h" [847694fb-caf2-4a33-8242-f98a3478a4ab] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.008416118s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable headlamp --alsologtostderr -v=1: (5.976951232s)
--- PASS: TestAddons/parallel/Headlamp (24.17s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-4bcng" [e797dc10-8dd3-40fb-b341-acd3b77befb6] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003943815s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (59.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-527167 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-527167 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [6e192953-5cfc-4197-ae86-0ca75dc20677] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [6e192953-5cfc-4197-ae86-0ca75dc20677] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [6e192953-5cfc-4197-ae86-0ca75dc20677] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.006577398s
addons_test.go:969: (dbg) Run:  kubectl --context addons-527167 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 ssh "cat /opt/local-path-provisioner/pvc-504f480d-0917-44c2-800d-feb4514a076a_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-527167 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-527167 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.154932922s)
--- PASS: TestAddons/parallel/LocalPath (59.01s)
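The local-path check writes a file from a short-lived busybox pod and then reads it back from the provisioner directory on the node. Sketch below; note the pvc-504f480d-… directory name is not fixed, it comes from the PV bound in this run and is discovered via the `get pvc -o=json` step:

    kubectl --context addons-527167 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-527167 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod completes, look up the bound PV name, then read the file it wrote from inside the VM
    kubectl --context addons-527167 get pvc test-pvc -o=json
    out/minikube-linux-amd64 -p addons-527167 ssh "cat /opt/local-path-provisioner/pvc-504f480d-0917-44c2-800d-feb4514a076a_default_test-pvc/file1"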

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-kvp8l" [7a6dd0af-94ae-4247-a754-2c54c0144063] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005368458s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-jpnmv" [ff24bdbb-f625-49b7-92bb-2388e8dc66ea] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004719008s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-527167 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-527167 addons disable yakd --alsologtostderr -v=1: (5.887177844s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (13.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-527167
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-527167: (12.992376562s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-527167
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-527167
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-527167
--- PASS: TestAddons/StoppedEnableDisable (13.20s)
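What this verifies is that addon enable/disable keeps working against a stopped cluster (presumably only updating the profile config, since no apiserver is running). The sequence is:

    out/minikube-linux-amd64 stop -p addons-527167
    # these must still succeed while the cluster is down
    out/minikube-linux-amd64 addons enable dashboard -p addons-527167
    out/minikube-linux-amd64 addons disable dashboard -p addons-527167
    out/minikube-linux-amd64 addons disable gvisor -p addons-527167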

                                                
                                    
x
+
TestCertOptions (64.67s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-265868 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1213 09:28:44.843754   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-265868 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m3.357035487s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-265868 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-265868 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-265868 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-265868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-265868
--- PASS: TestCertOptions (64.67s)
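The flags under test map directly onto the apiserver serving certificate and the kubeconfigs, so the result can be inspected by hand. These are the commands driven above (profile name from this run):

    # start with extra SANs and a non-default secure port
    out/minikube-linux-amd64 start -p cert-options-265868 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2
    # the requested IPs and names should appear among the certificate's Subject Alternative Names
    out/minikube-linux-amd64 -p cert-options-265868 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # and both the host-side and in-VM kubeconfigs should point at port 8555
    kubectl --context cert-options-265868 config view
    out/minikube-linux-amd64 ssh -p cert-options-265868 -- "sudo cat /etc/kubernetes/admin.conf"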

                                                
                                    
x
+
TestCertExpiration (311.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1213 09:27:16.340576   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:20.414787   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m6.528635264s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m4.360994054s)
helpers_test.go:176: Cleaning up "cert-expiration-953432" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-953432
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-953432: (1.037657195s)
--- PASS: TestCertExpiration (311.93s)
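The ~312s wall time is dominated by a deliberate wait: the cluster is created with certificates that expire after 3 minutes, the test sits out that window (the roughly 180s not accounted for by the two starts above), and the second start must regenerate the expired certs on its own. Roughly:

    # create a cluster whose certs expire almost immediately
    out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=3m --driver=kvm2
    sleep 180   # let the certificates lapse, matching the gap in the timings above
    # restarting with a sane expiry must succeed, i.e. minikube has to rotate the expired certs itself
    out/minikube-linux-amd64 start -p cert-expiration-953432 --memory=3072 --cert-expiration=8760h --driver=kvm2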

                                                
                                    
x
+
TestDockerFlags (70.72s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-264341 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-264341 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m9.344703596s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-264341 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-264341 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-264341" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-264341
--- PASS: TestDockerFlags (70.72s)
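--docker-env and --docker-opt are delivered through the docker systemd unit inside the VM, which is why the verification is two `systemctl show` queries. Condensed:

    # pass environment variables and daemon options through to dockerd at cluster creation
    out/minikube-linux-amd64 start -p docker-flags-264341 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2
    # FOO=BAR / BAZ=BAT should show up in Environment=, the two --docker-opt values in ExecStart
    out/minikube-linux-amd64 -p docker-flags-264341 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-264341 ssh "sudo systemctl show docker --property=ExecStart --no-pager"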

                                                
                                    
x
+
TestForceSystemdFlag (72.95s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-917619 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-917619 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m11.697542026s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-917619 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-917619" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-917619
--- PASS: TestForceSystemdFlag (72.95s)
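Both this test and TestForceSystemdEnv below end with the same one-line assertion: docker inside the VM must report the systemd cgroup driver. Here it is triggered by the --force-systemd flag; the env-var variant presumably relies on MINIKUBE_FORCE_SYSTEMD, which appears in the start output elsewhere in this report.

    out/minikube-linux-amd64 start -p force-systemd-flag-917619 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2
    # expected to print "systemd" rather than "cgroupfs"
    out/minikube-linux-amd64 -p force-systemd-flag-917619 ssh "docker info --format {{.CgroupDriver}}"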

                                                
                                    
x
+
TestForceSystemdEnv (50.5s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-972838 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-972838 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (49.254483594s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-972838 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-972838" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-972838
--- PASS: TestForceSystemdEnv (50.50s)

                                                
                                    
x
+
TestErrorSpam/setup (41.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-125743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-125743 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-125743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-125743 --driver=kvm2 : (41.546923356s)
--- PASS: TestErrorSpam/setup (41.55s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 pause
--- PASS: TestErrorSpam/pause (1.36s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
x
+
TestErrorSpam/stop (6.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop: (3.322728985s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop: (1.835086481s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-125743 --log_dir /tmp/nospam-125743 stop: (1.445049703s)
--- PASS: TestErrorSpam/stop (6.60s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-9390/.minikube/files/etc/test/nested/copy/13307/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.2s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-016924 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m21.201964123s)
--- PASS: TestFunctional/serial/StartWithProxy (81.20s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 08:37:14.115413   13307 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --alsologtostderr -v=8
E1213 08:37:37.341648   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.348236   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.359711   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.381277   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.422694   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.504181   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.665647   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:37.987432   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:38.629578   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:39.911202   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:42.472625   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:47.594598   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:57.836384   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-016924 --alsologtostderr -v=8: (55.577197152s)
functional_test.go:678: soft start took 55.577933911s for "functional-016924" cluster.
I1213 08:38:09.693049   13307 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (55.58s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-016924 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-016924 /tmp/TestFunctionalserialCacheCmdcacheadd_local1729259149/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache add minikube-local-cache-test:functional-016924
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache delete minikube-local-cache-test:functional-016924
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-016924
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (180.19372ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)
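`cache reload` is checked by deliberately deleting a cached image inside the node and confirming the reload puts it back; step by step:

    # remove the image from the node's runtime and confirm it is gone (crictl inspecti exits non-zero)
    out/minikube-linux-amd64 -p functional-016924 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # reload pushes the host-side cache back into the node, so the same inspecti now succeeds
    out/minikube-linux-amd64 -p functional-016924 cache reload
    out/minikube-linux-amd64 -p functional-016924 ssh sudo crictl inspecti registry.k8s.io/pause:latest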

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 kubectl -- --context functional-016924 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-016924 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (56.92s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:38:18.318102   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:38:59.280065   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-016924 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.917125682s)
functional_test.go:776: restart took 56.917245661s for "functional-016924" cluster.
I1213 08:39:12.224614   13307 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (56.92s)
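--extra-config uses component.key=value syntax to inject an arbitrary flag into a control-plane component across a restart; the resulting ExtraOptions entry for this run is visible in the profile config dumped under TestFunctional/parallel/DryRun further down. The restart itself is just:

    # restart the existing functional-016924 cluster with an extra kube-apiserver admission plugin enabled
    out/minikube-linux-amd64 start -p functional-016924 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all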

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-016924 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-016924 logs: (1.073243075s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 logs --file /tmp/TestFunctionalserialLogsFileCmd3270904888/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-016924 logs --file /tmp/TestFunctionalserialLogsFileCmd3270904888/001/logs.txt: (1.114651779s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-016924 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-016924
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-016924: exit status 115 (242.834559ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.217:30213 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-016924 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
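This is the negative case: `minikube service` must refuse to hand out a usable URL for a service with no running backing pod, exiting 115 with SVC_UNREACHABLE as captured in the stderr block above. To reproduce:

    kubectl --context functional-016924 apply -f testdata/invalidsvc.yaml
    # expected to fail with exit status 115 / SVC_UNREACHABLE
    out/minikube-linux-amd64 service invalid-svc -p functional-016924
    kubectl --context functional-016924 delete -f testdata/invalidsvc.yaml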

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 config get cpus: exit status 14 (68.902453ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 config get cpus: exit status 14 (61.917205ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
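The config round-trip above hinges on `config get` exiting 14 when a key is absent; spelled out:

    out/minikube-linux-amd64 -p functional-016924 config unset cpus
    out/minikube-linux-amd64 -p functional-016924 config get cpus     # exit status 14: key not in config
    out/minikube-linux-amd64 -p functional-016924 config set cpus 2
    out/minikube-linux-amd64 -p functional-016924 config get cpus     # succeeds and should print 2
    out/minikube-linux-amd64 -p functional-016924 config unset cpus
    out/minikube-linux-amd64 -p functional-016924 config get cpus     # back to exit status 14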

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (33.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016924 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016924 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 19137: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (33.02s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (136.571688ms)

                                                
                                                
-- stdout --
	* [functional-016924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:39:53.712942   19412 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:53.713289   19412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:53.713306   19412 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:53.713314   19412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:53.713693   19412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:39:53.714322   19412 out.go:368] Setting JSON to false
	I1213 08:39:53.715586   19412 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1344,"bootTime":1765613850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:39:53.715671   19412 start.go:143] virtualization: kvm guest
	I1213 08:39:53.719593   19412 out.go:179] * [functional-016924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:39:53.721172   19412 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:53.721177   19412 notify.go:221] Checking for updates...
	I1213 08:39:53.723634   19412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:53.725100   19412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:39:53.726555   19412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:39:53.728083   19412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:39:53.729491   19412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:53.731687   19412 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:39:53.732416   19412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:53.769755   19412 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 08:39:53.771241   19412 start.go:309] selected driver: kvm2
	I1213 08:39:53.771265   19412 start.go:927] validating driver "kvm2" against &{Name:functional-016924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-016924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:53.771423   19412 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:53.773800   19412 out.go:203] 
	W1213 08:39:53.775305   19412 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:39:53.776676   19412 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
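The dry-run check above amounts to two start invocations against the existing profile: one with an impossibly small --memory so that validation fails fast, and one with the profile defaults so that validation passes without creating anything. A minimal reproduction, as a sketch (flags follow the commands recorded in the log; minikube reports the RSRC_INSUFFICIENT_REQ_MEMORY failure as exit status 23, as seen in the localized run below):
	out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --driver=kvm2     # fails validation, exit status 23
	out/minikube-linux-amd64 start -p functional-016924 --dry-run --alsologtostderr -v=1 --driver=kvm2     # validation only, nothing is created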

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (137.201163ms)
-- stdout --
	* [functional-016924] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 08:39:20.759420   18701 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:20.759532   18701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:20.759538   18701 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:20.759545   18701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:20.759908   18701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:39:20.760419   18701 out.go:368] Setting JSON to false
	I1213 08:39:20.761422   18701 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1311,"bootTime":1765613850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:39:20.761488   18701 start.go:143] virtualization: kvm guest
	I1213 08:39:20.762940   18701 out.go:179] * [functional-016924] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:39:20.764466   18701 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:20.764448   18701 notify.go:221] Checking for updates...
	I1213 08:39:20.765786   18701 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:20.767358   18701 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:39:20.768499   18701 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:39:20.769738   18701 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:39:20.771296   18701 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:20.773224   18701 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:39:20.774034   18701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:20.812981   18701 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 08:39:20.814248   18701 start.go:309] selected driver: kvm2
	I1213 08:39:20.814264   18701 start.go:927] validating driver "kvm2" against &{Name:functional-016924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-016924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:20.814417   18701 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:20.816477   18701 out.go:203] 
	W1213 08:39:20.817690   18701 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:39:20.818995   18701 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
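The French stdout and stderr above ("Utilisation du pilote kvm2 basé sur le profil existant", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY"), corresponding to the English "Using the kvm2 driver based on existing profile" and the same RSRC_INSUFFICIENT_REQ_MEMORY exit, are the localized rendering of the dry-run memory failure from the previous test. A sketch of reproducing it by hand, assuming minikube selects its message catalogue from the standard locale environment variables:
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-016924 --dry-run --memory 250MB --driver=kvm2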

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
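The status command accepts a Go template over the fields exercised here (Host, Kubelet, APIServer, Kubeconfig), so a single component can be checked on its own, for example:
	out/minikube-linux-amd64 -p functional-016924 status -f '{{.APIServer}}'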

TestFunctional/parallel/ServiceCmdConnect (8.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-016924 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-016924 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-j8gpr" [2015621a-ceaa-4e8e-9d1f-06116f914606] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/12/13 08:40:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "hello-node-connect-7d85dfc575-j8gpr" [2015621a-ceaa-4e8e-9d1f-06116f914606] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004397901s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.217:32227
functional_test.go:1680: http://192.168.39.217:32227: success! body:
Request served by hello-node-connect-7d85dfc575-j8gpr

HTTP/1.1 GET /

Host: 192.168.39.217:32227
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)
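The response body above is just kicbase/echo-server reflecting the request headers back. Once service --url has printed the NodePort endpoint (http://192.168.39.217:32227 in this run), it can be probed directly from the host:
	curl http://192.168.39.217:32227/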

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (48.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [bdc19185-e345-4bda-a742-2663f5b134b7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005279125s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-016924 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-016924 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-016924 get pvc myclaim -o=json
I1213 08:39:26.527947   13307 retry.go:31] will retry after 2.648581151s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:e833d30f-f32e-414a-8ef9-5b537d94b7e8 ResourceVersion:732 Generation:0 CreationTimestamp:2025-12-13 08:39:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018e4be0 VolumeMode:0xc0018e4bf0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-016924 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-016924 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:39:29.578742   13307 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [343889a1-b1dd-431b-aa86-a6583e6fec8d] Pending
helpers_test.go:353: "sp-pod" [343889a1-b1dd-431b-aa86-a6583e6fec8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [343889a1-b1dd-431b-aa86-a6583e6fec8d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.026607408s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-016924 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-016924 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-016924 delete -f testdata/storage-provisioner/pod.yaml: (2.450302118s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-016924 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4b00b1c1-0676-4f20-82c7-13d6760d129c] Pending
helpers_test.go:353: "sp-pod" [4b00b1c1-0676-4f20-82c7-13d6760d129c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4b00b1c1-0676-4f20-82c7-13d6760d129c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005921594s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-016924 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.75s)
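The retry at 08:39:26 is simply polling the claim until the storage-provisioner binds it. An equivalent one-off check of the phase, as a sketch using kubectl's jsonpath output:
	kubectl --context functional-016924 get pvc myclaim -o jsonpath='{.status.phase}'     # prints Bound once the volume is provisioned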

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (1.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh -n functional-016924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cp functional-016924:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd726873591/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh -n functional-016924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh -n functional-016924 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

TestFunctional/parallel/MySQL (47.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-016924 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-mvpw9" [cf30b459-17d9-4188-b35e-2ecb8e67f8df] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-mvpw9" [cf30b459-17d9-4188-b35e-2ecb8e67f8df] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.010834359s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;": exit status 1 (237.757192ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1213 08:39:51.410505   13307 retry.go:31] will retry after 915.160118ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;": exit status 1 (262.756404ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1213 08:39:52.589640   13307 retry.go:31] will retry after 1.344549766s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;": exit status 1 (281.618555ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1213 08:39:54.216699   13307 retry.go:31] will retry after 2.216329765s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;": exit status 1 (535.471995ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1213 08:39:56.969080   13307 retry.go:31] will retry after 4.972216828s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;": exit status 1 (231.584113ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1213 08:40:02.173907   13307 retry.go:31] will retry after 5.212494695s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (47.59s)
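The ERROR 2002 and ERROR 1045 retries above are only mysqld finishing initialization inside the pod; once it is up, the same command succeeds (the pod name is specific to this run):
	kubectl --context functional-016924 exec mysql-6bcdcbc558-mvpw9 -- mysql -ppassword -e "show databases;"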

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13307/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /etc/test/nested/copy/13307/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13307.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /etc/ssl/certs/13307.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13307.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /usr/share/ca-certificates/13307.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/133072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /etc/ssl/certs/133072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/133072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /usr/share/ca-certificates/133072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)
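The hash-named files checked above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) are the OpenSSL subject-hash aliases of the two synced test certificates. As a sketch, and assuming openssl is available inside the guest, the hash of a given cert can be recomputed to confirm the pairing:
	out/minikube-linux-amd64 -p functional-016924 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/13307.pem"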

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-016924 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
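The go-template above only enumerates the label keys of the first node; the same information is available more directly with:
	kubectl --context functional-016924 get nodes --show-labels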

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh "sudo systemctl is-active crio": exit status 1 (210.513992ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-016924 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-016924 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-mtwxb" [7debd516-88e0-4f38-834a-092eb11b91e1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-mtwxb" [7debd516-88e0-4f38-834a-092eb11b91e1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005525292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

TestFunctional/parallel/DockerEnv/bash (0.94s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-016924 docker-env) && out/minikube-linux-amd64 status -p functional-016924"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-016924 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.94s)
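docker-env prints shell exports that point a local docker client at the daemon running inside the functional-016924 VM; the test simply wraps the two steps a user would run (sketch, assuming a docker client is on PATH on the host):
	eval $(out/minikube-linux-amd64 -p functional-016924 docker-env)
	docker images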

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "301.792385ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.881945ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "312.626512ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.935056ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (28.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdany-port2688413469/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615161933506493" to /tmp/TestFunctionalparallelMountCmdany-port2688413469/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615161933506493" to /tmp/TestFunctionalparallelMountCmdany-port2688413469/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615161933506493" to /tmp/TestFunctionalparallelMountCmdany-port2688413469/001/test-1765615161933506493
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.024306ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 08:39:22.106844   13307 retry.go:31] will retry after 287.074783ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 test-1765615161933506493
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh cat /mount-9p/test-1765615161933506493
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-016924 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f996c250-8d66-4d0a-b502-3bde937b447b] Pending
helpers_test.go:353: "busybox-mount" [f996c250-8d66-4d0a-b502-3bde937b447b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f996c250-8d66-4d0a-b502-3bde937b447b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f996c250-8d66-4d0a-b502-3bde937b447b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 27.014932848s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-016924 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdany-port2688413469/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.96s)
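The any-port flow above is entirely host-driven: a long-running mount process, a findmnt probe over ssh, and a forced unmount at the end. A trimmed sketch of the same steps (the host directory /tmp/somedir is hypothetical; the test uses a per-test temp dir):
	out/minikube-linux-amd64 mount -p functional-016924 /tmp/somedir:/mount-9p     # run in a separate terminal; it blocks while serving the mount
	out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-016924 ssh "sudo umount -f /mount-9p"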

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service list -o json
functional_test.go:1504: Took "506.581267ms" to run "out/minikube-linux-amd64 -p functional-016924 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.217:31633
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.217:31633
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/MountCmd/specific-port (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdspecific-port4170219458/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.777351ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 08:39:51.117336   13307 retry.go:31] will retry after 511.169404ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdspecific-port4170219458/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh "sudo umount -f /mount-9p": exit status 1 (189.584789ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-016924 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdspecific-port4170219458/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T" /mount1: exit status 1 (240.222275ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 08:39:52.649333   13307 retry.go:31] will retry after 327.423705ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-016924 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189872439/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016924 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-016924
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-016924
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016924 image ls --format short --alsologtostderr:
I1213 08:40:05.591775   19822 out.go:360] Setting OutFile to fd 1 ...
I1213 08:40:05.591892   19822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:05.591907   19822 out.go:374] Setting ErrFile to fd 2...
I1213 08:40:05.591913   19822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:05.592153   19822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:40:05.593016   19822 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:05.593172   19822 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:05.595412   19822 ssh_runner.go:195] Run: systemctl --version
I1213 08:40:05.598146   19822 main.go:143] libmachine: domain functional-016924 has defined MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:05.598750   19822 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:33:93", ip: ""} in network mk-functional-016924: {Iface:virbr1 ExpiryTime:2025-12-13 09:36:08 +0000 UTC Type:0 Mac:52:54:00:05:33:93 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-016924 Clientid:01:52:54:00:05:33:93}
I1213 08:40:05.598792   19822 main.go:143] libmachine: domain functional-016924 has defined IP address 192.168.39.217 and MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:05.599025   19822 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-016924/id_rsa Username:docker}
I1213 08:40:05.679963   19822 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
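As the stderr above shows, image ls is answered by running docker images --no-trunc --format "{{json .}}" inside the guest over SSH, so the same raw listing can be pulled directly if needed:
	out/minikube-linux-amd64 -p functional-016924 ssh "docker images --no-trunc --format '{{json .}}'"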

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016924 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-016924 │ f6afca449e011 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ docker.io/kicbase/echo-server               │ functional-016924 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016924 image ls --format table --alsologtostderr:
I1213 08:40:07.790715   19887 out.go:360] Setting OutFile to fd 1 ...
I1213 08:40:07.790838   19887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:07.790849   19887 out.go:374] Setting ErrFile to fd 2...
I1213 08:40:07.790853   19887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:07.791114   19887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:40:07.791707   19887 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:07.791803   19887 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:07.794060   19887 ssh_runner.go:195] Run: systemctl --version
I1213 08:40:07.796475   19887 main.go:143] libmachine: domain functional-016924 has defined MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:07.797019   19887 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:33:93", ip: ""} in network mk-functional-016924: {Iface:virbr1 ExpiryTime:2025-12-13 09:36:08 +0000 UTC Type:0 Mac:52:54:00:05:33:93 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-016924 Clientid:01:52:54:00:05:33:93}
I1213 08:40:07.797048   19887 main.go:143] libmachine: domain functional-016924 has defined IP address 192.168.39.217 and MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:07.797212   19887 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-016924/id_rsa Username:docker}
I1213 08:40:07.879508   19887 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016924 image ls --format json --alsologtostderr:
[{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df
59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"f6afca449e011ac4564696c2c2f9d78931c875dc1482518037b86a6444783193","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-016924"],"size":"30"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDige
sts":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-016924","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016924 image ls --format json --alsologtostderr:
I1213 08:40:07.601207   19876 out.go:360] Setting OutFile to fd 1 ...
I1213 08:40:07.601339   19876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:07.601370   19876 out.go:374] Setting ErrFile to fd 2...
I1213 08:40:07.601375   19876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:07.601587   19876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:40:07.602118   19876 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:07.602216   19876 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:07.604753   19876 ssh_runner.go:195] Run: systemctl --version
I1213 08:40:07.607567   19876 main.go:143] libmachine: domain functional-016924 has defined MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:07.608114   19876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:33:93", ip: ""} in network mk-functional-016924: {Iface:virbr1 ExpiryTime:2025-12-13 09:36:08 +0000 UTC Type:0 Mac:52:54:00:05:33:93 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-016924 Clientid:01:52:54:00:05:33:93}
I1213 08:40:07.608143   19876 main.go:143] libmachine: domain functional-016924 has defined IP address 192.168.39.217 and MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:07.608303   19876 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-016924/id_rsa Username:docker}
I1213 08:40:07.689695   19876 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
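Note on the JSON listing above: it is a flat array of image records with the keys id, repoDigests, repoTags and size (size is a byte count encoded as a string). Below is a minimal Go sketch, not part of the test suite, for decoding that format outside the harness; the piped command in the comment is the one shown in the log, everything else is illustrative.

// sketch_imagelist.go - a minimal sketch, assuming the JSON shape shown in the
// ImageListJson output above. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string in the output above
}

func main() {
	// Example usage (hypothetical invocation, mirroring the command in the log):
	//   out/minikube-linux-amd64 -p functional-016924 image ls --format json | go run sketch_imagelist.go
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s\t%s bytes\n", tag, img.ID, img.Size)
		}
	}
}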

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016924 image ls --format yaml --alsologtostderr:
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-016924
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: f6afca449e011ac4564696c2c2f9d78931c875dc1482518037b86a6444783193
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-016924
size: "30"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016924 image ls --format yaml --alsologtostderr:
I1213 08:40:05.780678   19833 out.go:360] Setting OutFile to fd 1 ...
I1213 08:40:05.780778   19833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:05.780789   19833 out.go:374] Setting ErrFile to fd 2...
I1213 08:40:05.780795   19833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:05.781029   19833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:40:05.782382   19833 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:05.782704   19833 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:05.784961   19833 ssh_runner.go:195] Run: systemctl --version
I1213 08:40:05.787486   19833 main.go:143] libmachine: domain functional-016924 has defined MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:05.787934   19833 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:33:93", ip: ""} in network mk-functional-016924: {Iface:virbr1 ExpiryTime:2025-12-13 09:36:08 +0000 UTC Type:0 Mac:52:54:00:05:33:93 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-016924 Clientid:01:52:54:00:05:33:93}
I1213 08:40:05.787964   19833 main.go:143] libmachine: domain functional-016924 has defined IP address 192.168.39.217 and MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:05.788109   19833 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-016924/id_rsa Username:docker}
I1213 08:40:05.879366   19833 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016924 ssh pgrep buildkitd: exit status 1 (158.605382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image build -t localhost/my-image:functional-016924 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-016924 image build -t localhost/my-image:functional-016924 testdata/build --alsologtostderr: (3.124948916s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016924 image build -t localhost/my-image:functional-016924 testdata/build --alsologtostderr:
I1213 08:40:06.128539   19855 out.go:360] Setting OutFile to fd 1 ...
I1213 08:40:06.128688   19855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:06.128697   19855 out.go:374] Setting ErrFile to fd 2...
I1213 08:40:06.128701   19855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:40:06.128892   19855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:40:06.129461   19855 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:06.130121   19855 config.go:182] Loaded profile config "functional-016924": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:40:06.132430   19855 ssh_runner.go:195] Run: systemctl --version
I1213 08:40:06.134677   19855 main.go:143] libmachine: domain functional-016924 has defined MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:06.135093   19855 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:33:93", ip: ""} in network mk-functional-016924: {Iface:virbr1 ExpiryTime:2025-12-13 09:36:08 +0000 UTC Type:0 Mac:52:54:00:05:33:93 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-016924 Clientid:01:52:54:00:05:33:93}
I1213 08:40:06.135123   19855 main.go:143] libmachine: domain functional-016924 has defined IP address 192.168.39.217 and MAC address 52:54:00:05:33:93 in network mk-functional-016924
I1213 08:40:06.135249   19855 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-016924/id_rsa Username:docker}
I1213 08:40:06.212793   19855 build_images.go:162] Building image from path: /tmp/build.2107327708.tar
I1213 08:40:06.212854   19855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:40:06.227934   19855 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2107327708.tar
I1213 08:40:06.234508   19855 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2107327708.tar: stat -c "%s %y" /var/lib/minikube/build/build.2107327708.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2107327708.tar': No such file or directory
I1213 08:40:06.234554   19855 ssh_runner.go:362] scp /tmp/build.2107327708.tar --> /var/lib/minikube/build/build.2107327708.tar (3072 bytes)
I1213 08:40:06.272204   19855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2107327708
I1213 08:40:06.287119   19855 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2107327708 -xf /var/lib/minikube/build/build.2107327708.tar
I1213 08:40:06.300564   19855 docker.go:361] Building image: /var/lib/minikube/build/build.2107327708
I1213 08:40:06.300650   19855 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-016924 /var/lib/minikube/build/build.2107327708
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:60a5f47f44032302788620aaa019981467ae5984466aa821c026fc938c8a5f65 done
#8 naming to localhost/my-image:functional-016924 done
#8 DONE 0.1s
I1213 08:40:09.151794   19855 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-016924 /var/lib/minikube/build/build.2107327708: (2.851111155s)
I1213 08:40:09.151895   19855 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2107327708
I1213 08:40:09.174801   19855 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2107327708.tar
I1213 08:40:09.188914   19855 build_images.go:218] Built localhost/my-image:functional-016924 from /tmp/build.2107327708.tar
I1213 08:40:09.188968   19855 build_images.go:134] succeeded building to: functional-016924
I1213 08:40:09.188974   19855 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)
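For context, the ImageBuild log above first probes for buildkitd over ssh, then builds testdata/build inside the node with docker and confirms the tag via a follow-up image ls. A rough Go sketch of that build-and-verify flow is shown below; the binary path, profile and tag are copied from the log, and the sketch is an illustration rather than the test's own implementation.

// sketch_imagebuild.go - a rough sketch of the build-and-verify flow exercised
// above, assuming the binary path, profile name and context directory shown in
// the log. Not the test's own code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // path used in the log above
		profile  = "functional-016924"
		tag      = "localhost/my-image:functional-016924"
	)

	// Build an image inside the cluster from a local context directory.
	build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, "testdata/build")
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "build failed:", err)
		os.Exit(1)
	}

	// Confirm the tag shows up, as the test's follow-up `image ls` does.
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image ls failed:", err)
		os.Exit(1)
	}
	if !strings.Contains(string(out), tag) {
		fmt.Fprintln(os.Stderr, "built image not found in image ls output")
		os.Exit(1)
	}
	fmt.Println("image built and listed:", tag)
}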

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.592277566s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-016924
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image load --daemon kicbase/echo-server:functional-016924 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-016924 image load --daemon kicbase/echo-server:functional-016924 --alsologtostderr: (1.096648007s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image load --daemon kicbase/echo-server:functional-016924 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-016924
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image load --daemon kicbase/echo-server:functional-016924 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image save kicbase/echo-server:functional-016924 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
I1213 08:39:59.731060   13307 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image rm kicbase/echo-server:functional-016924 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-016924
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-016924 image save --daemon kicbase/echo-server:functional-016924 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-016924
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-016924
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-016924
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-016924
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-9390/.minikube/files/etc/test/nested/copy/13307/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (86.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 08:40:21.203229   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-888658 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m26.628427556s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (86.63s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (57.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 08:41:38.958339   13307 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-888658 --alsologtostderr -v=8: (57.944744874s)
functional_test.go:678: soft start took 57.94515404s for "functional-888658" cluster.
I1213 08:42:36.903477   13307 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (57.95s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-888658 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache add registry.k8s.io/pause:3.1
E1213 08:42:37.332713   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1738398225/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache add minikube-local-cache-test:functional-888658
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache delete minikube-local-cache-test:functional-888658
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.614932ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 kubectl -- --context functional-888658 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-888658 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (55.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:43:05.050204   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-888658 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.505680586s)
functional_test.go:776: restart took 55.505857417s for "functional-888658" cluster.
I1213 08:43:37.828693   13307 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (55.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-888658 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-888658 logs: (1.088606779s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1852572768/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-888658 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1852572768/001/logs.txt: (1.106460267s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-888658 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-888658
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-888658: exit status 115 (253.534997ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.29:32506 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-888658 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-888658 delete -f testdata/invalidsvc.yaml: (1.181256415s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.65s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 config get cpus: exit status 14 (66.104222ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 config get cpus: exit status 14 (71.810147ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
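The ConfigCmd run above shows that `config get` on an unset key exits with status 14, while the set/get round trip succeeds. A minimal Go sketch of reading that exit code through os/exec follows; the binary path and profile name are taken from the log, and the helper itself is illustrative only.

// sketch_configcmd.go - a minimal sketch of the exit-code behaviour shown above:
// `config get` on an unset key exits 14. Illustrative, not the test's own code.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func configGet(key string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-888658", "config", "get", key)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil // non-zero exit, e.g. 14 for a missing key
	}
	if err != nil {
		return "", 0, err
	}
	return string(out), 0, nil
}

func main() {
	val, code, err := configGet("cpus")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if code == 14 {
		fmt.Println("cpus is not set (exit status 14), as in the log above")
		return
	}
	fmt.Printf("cpus = %s (exit %d)\n", val, code)
}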

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (47.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-888658 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-888658 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 22263: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (47.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-888658 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (131.71115ms)

                                                
                                                
-- stdout --
	* [functional-888658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:43:55.867231   22076 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:43:55.867553   22076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:55.867564   22076 out.go:374] Setting ErrFile to fd 2...
	I1213 08:43:55.867568   22076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:55.867761   22076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:43:55.868250   22076 out.go:368] Setting JSON to false
	I1213 08:43:55.869135   22076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1586,"bootTime":1765613850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:43:55.869199   22076 start.go:143] virtualization: kvm guest
	I1213 08:43:55.871308   22076 out.go:179] * [functional-888658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:43:55.872797   22076 notify.go:221] Checking for updates...
	I1213 08:43:55.873632   22076 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:43:55.875196   22076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:43:55.876785   22076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:43:55.878275   22076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:43:55.879455   22076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:43:55.880732   22076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:43:55.882495   22076 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:43:55.883249   22076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:43:55.920436   22076 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 08:43:55.921546   22076 start.go:309] selected driver: kvm2
	I1213 08:43:55.921566   22076 start.go:927] validating driver "kvm2" against &{Name:functional-888658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-888658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:43:55.921722   22076 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:43:55.924328   22076 out.go:203] 
	W1213 08:43:55.925448   22076 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:43:55.926456   22076 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --dry-run --alsologtostderr -v=1 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-888658 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-888658 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (152.04172ms)

-- stdout --
	* [functional-888658] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 08:43:55.730842   22026 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:43:55.731124   22026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:55.731136   22026 out.go:374] Setting ErrFile to fd 2...
	I1213 08:43:55.731145   22026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:55.731657   22026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:43:55.732431   22026 out.go:368] Setting JSON to false
	I1213 08:43:55.733684   22026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1586,"bootTime":1765613850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:43:55.733751   22026 start.go:143] virtualization: kvm guest
	I1213 08:43:55.738154   22026 out.go:179] * [functional-888658] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:43:55.740044   22026 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:43:55.740061   22026 notify.go:221] Checking for updates...
	I1213 08:43:55.742391   22026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:43:55.743600   22026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	I1213 08:43:55.745183   22026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	I1213 08:43:55.746408   22026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:43:55.747479   22026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:43:55.748953   22026 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:43:55.749473   22026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:43:55.788481   22026 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 08:43:55.789796   22026 start.go:309] selected driver: kvm2
	I1213 08:43:55.789812   22026 start.go:927] validating driver "kvm2" against &{Name:functional-888658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-888658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:43:55.789926   22026 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:43:55.792009   22026 out.go:203] 
	W1213 08:43:55.793148   22026 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:43:55.794327   22026 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)
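
The check above only asserts that the RSRC_INSUFFICIENT_REQ_MEMORY message comes back localized (French, in this run). For readers reproducing that by hand, here is a minimal Go sketch, not the harness's own code: it reruns the same dry-run command and greps the output. The binary path and profile name are taken from this log; that the locale variables below are what switch minikube's language is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same dry-run invocation as the harness; exit status 23 is expected because
	// 250MB is below the usable minimum, so the error itself is ignored here.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-888658",
		"--dry-run", "--memory", "250MB", "--driver=kvm2")
	// Assumption: a French locale in the environment switches the output language.
	cmd.Env = append(cmd.Environ(), "LC_ALL=fr_FR.UTF-8", "LANGUAGE=fr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit: %v\n", err)
	if strings.Contains(string(out), "Utilisation du pilote kvm2") {
		fmt.Println("localized (French) output detected")
	}
}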

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.88s)
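
The second status invocation above shows that `minikube status -f` accepts a Go-template format string (the harness spells the key "kublet" exactly as logged). A minimal sketch of driving it the same way from Go, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Format string copied verbatim from the harness invocation above
	// (including the "kublet" spelling of the key).
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-888658",
		"status", "-f", format).Output()
	if err != nil {
		// minikube status uses non-zero exit codes to signal non-Running components,
		// so a non-nil error is not automatically a failure.
		fmt.Printf("status exited non-zero: %v\n", err)
	}
	fmt.Println(string(out))
}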

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-888658 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-888658 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-2mmb7" [df33cd81-0a9f-47e6-a012-d250dc26e1ba] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-2mmb7" [df33cd81-0a9f-47e6-a012-d250dc26e1ba] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.007408273s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.29:30409
functional_test.go:1680: http://192.168.39.29:30409: success! body:
Request served by hello-node-connect-9f67c86d4-2mmb7

HTTP/1.1 GET /

Host: 192.168.39.29:30409
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.58s)
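
The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the URL, then fetch it. A minimal Go sketch of the last two steps, assuming the profile and service names from this run and that the service is already exposed:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the already-exposed service...
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-888658",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// ...then fetch it and print whatever the echo-server sends back.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}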

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (31.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0e5edf6a-4a19-4157-88e0-3c3cb10a7023] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004594294s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-888658 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-888658 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-888658 get pvc myclaim -o=json
I1213 08:43:51.133069   13307 retry.go:31] will retry after 2.594695358s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:23ac24d9-017d-4b62-a9b5-a9d99e1b2d0c ResourceVersion:772 Generation:0 CreationTimestamp:2025-12-13 08:43:51 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001908ea0 VolumeMode:0xc001908eb0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-888658 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-888658 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:43:53.948853   13307 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [48c6fda0-7959-4bbb-afd0-7324ba1c9c60] Pending
helpers_test.go:353: "sp-pod" [48c6fda0-7959-4bbb-afd0-7324ba1c9c60] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [48c6fda0-7959-4bbb-afd0-7324ba1c9c60] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005826965s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-888658 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-888658 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-888658 delete -f testdata/storage-provisioner/pod.yaml: (1.53965209s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-888658 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:44:08.850458   13307 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8535fdae-1217-463a-afe3-f7bc6c1af9ee] Pending
helpers_test.go:353: "sp-pod" [8535fdae-1217-463a-afe3-f7bc6c1af9ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8535fdae-1217-463a-afe3-f7bc6c1af9ee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006161792s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-888658 exec sp-pod -- ls /tmp/mount
E1213 08:44:18.852540   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:18.859060   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:18.870625   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:18.892132   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:18.933592   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:19.015117   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:19.176665   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:19.498498   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:20.140364   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:21.422230   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:23.983585   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (31.22s)
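
The retry at 08:43:51 happens because the claim is still "Pending" when it is first inspected; the harness simply polls until the phase becomes "Bound". A minimal Go sketch of such a polling loop, an illustration rather than the test's own retry helper, using kubectl with the context name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-888658",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		fmt.Printf("pvc phase = %q, want Bound; retrying\n", phase)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for the claim to bind")
}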

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh -n functional-888658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cp functional-888658:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp159785023/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh -n functional-888658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh -n functional-888658 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (45.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-888658 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-n5pfl" [bbd6ea44-fab8-45da-90ca-5a6b5161d7c1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-n5pfl" [bbd6ea44-fab8-45da-90ca-5a6b5161d7c1] Running
E1213 08:44:29.105677   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 33.246749898s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;": exit status 1 (234.255981ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 08:44:31.227942   13307 retry.go:31] will retry after 1.067418659s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;": exit status 1 (364.091069ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 08:44:32.660503   13307 retry.go:31] will retry after 1.981358545s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;": exit status 1 (219.289393ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 08:44:34.863155   13307 retry.go:31] will retry after 3.223688207s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;": exit status 1 (158.882628ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 08:44:38.246592   13307 retry.go:31] will retry after 4.532398366s: exit status 1
E1213 08:44:39.347028   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-888658 exec mysql-7d7b65bc95-n5pfl -- mysql -ppassword -e "show databases;"
2025/12/13 08:44:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (45.47s)
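
The repeated ERROR 2002/1045 responses above are normal while mysqld is still initializing inside the pod; the harness retries with a growing backoff until "show databases;" succeeds. A minimal Go sketch of that backoff pattern (the pod name is copied from this run and would differ elsewhere):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-7d7b65bc95-n5pfl" // pod name copied from this run; it will differ elsewhere
	for attempt, wait := 1, time.Second; attempt <= 6; attempt, wait = attempt+1, wait*2 {
		out, err := exec.Command("kubectl", "--context", "functional-888658", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is ready:\n%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, wait)
		time.Sleep(wait)
	}
	fmt.Println("mysql never became reachable")
}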

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13307/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /etc/test/nested/copy/13307/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13307.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /etc/ssl/certs/13307.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13307.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /usr/share/ca-certificates/13307.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/133072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /etc/ssl/certs/133072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/133072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /usr/share/ca-certificates/133072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-888658 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh "sudo systemctl is-active crio": exit status 1 (240.721235ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.24s)
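
`systemctl is-active` prints the unit state and exits non-zero for anything other than "active", which is why the command above returns exit status 3 together with "inactive" on stdout; on a docker-runtime cluster that combination is the expected result. A small Go sketch of the same check, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Only stdout is captured; the "ssh: Process exited with status 3" message goes to stderr.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-888658",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio is disabled, as expected on a docker-runtime cluster")
	} else {
		fmt.Printf("unexpected state %q (err: %v)\n", state, err)
	}
}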

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-888658 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-888658 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-2hxss" [f192594d-8979-4537-b59c-60b8a3654614] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-2hxss" [f192594d-8979-4537-b59c-60b8a3654614] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004884442s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "262.803961ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.040829ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "261.479142ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.128204ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)
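
`profile list -o json` (and the `--light` variant) emits machine-readable output, which is what makes the timing assertions above useful for scripting. A minimal Go sketch that decodes it loosely; the exact schema is not shown in this log, so the code deliberately avoids assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode loosely rather than guessing field names the log does not show.
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for key := range profiles {
		fmt.Println("top-level key:", key)
	}
}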

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2849309468/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615427005878069" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2849309468/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615427005878069" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2849309468/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615427005878069" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2849309468/001/test-1765615427005878069
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.251319ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 08:43:47.195475   13307 retry.go:31] will retry after 664.156643ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:43 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:43 test-1765615427005878069
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh cat /mount-9p/test-1765615427005878069
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-888658 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [03a5f50a-266e-4f5c-ae64-3e9f15812e87] Pending
helpers_test.go:353: "busybox-mount" [03a5f50a-266e-4f5c-ae64-3e9f15812e87] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [03a5f50a-266e-4f5c-ae64-3e9f15812e87] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [03a5f50a-266e-4f5c-ae64-3e9f15812e87] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003088462s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-888658 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2849309468/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.22s)
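
The mount check works by running `findmnt -T /mount-9p | grep 9p` over `minikube ssh` and retrying when the 9p mount has not appeared yet (the single ~664ms retry above). A minimal Go sketch of that verification, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-888658",
		"ssh", "findmnt -T /mount-9p").Output()
	if err != nil {
		fmt.Println("mount point not visible yet; the harness simply retries after a short delay")
		return
	}
	if strings.Contains(string(out), "9p") {
		fmt.Print("9p mount is active:\n" + string(out))
	}
}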

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo221901643/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.506322ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 08:43:54.411213   13307 retry.go:31] will retry after 428.708746ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo221901643/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh "sudo umount -f /mount-9p": exit status 1 (227.527921ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-888658 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo221901643/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service list -o json
functional_test.go:1504: Took "286.489513ms" to run "out/minikube-linux-amd64 -p functional-888658 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.29:32319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T" /mount1: exit status 1 (242.109173ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 08:43:55.893095   13307 retry.go:31] will retry after 635.706905ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-888658 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-888658 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3715474820/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.29:32319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-888658 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-888658
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-888658
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-888658 image ls --format short --alsologtostderr:
I1213 08:44:07.183263   22613 out.go:360] Setting OutFile to fd 1 ...
I1213 08:44:07.183551   22613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:07.183563   22613 out.go:374] Setting ErrFile to fd 2...
I1213 08:44:07.183569   22613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:07.183760   22613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:44:07.184438   22613 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:07.184564   22613 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:07.186768   22613 ssh_runner.go:195] Run: systemctl --version
I1213 08:44:07.189936   22613 main.go:143] libmachine: domain functional-888658 has defined MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:07.190514   22613 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:a9:96", ip: ""} in network mk-functional-888658: {Iface:virbr1 ExpiryTime:2025-12-13 09:40:27 +0000 UTC Type:0 Mac:52:54:00:10:a9:96 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-888658 Clientid:01:52:54:00:10:a9:96}
I1213 08:44:07.190545   22613 main.go:143] libmachine: domain functional-888658 has defined IP address 192.168.39.29 and MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:07.190751   22613 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-888658/id_rsa Username:docker}
I1213 08:44:07.311173   22613 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-888658 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ docker.io/library/minikube-local-cache-test │ functional-888658 │ f6afca449e011 │ 30B    │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ localhost/my-image                          │ functional-888658 │ 55fb9edcb598c │ 1.24MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ docker.io/kicbase/echo-server               │ functional-888658 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-888658 image ls --format table --alsologtostderr:
I1213 08:44:12.176009   22727 out.go:360] Setting OutFile to fd 1 ...
I1213 08:44:12.176127   22727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:12.176135   22727 out.go:374] Setting ErrFile to fd 2...
I1213 08:44:12.176139   22727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:12.176306   22727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:44:12.176837   22727 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:12.176928   22727 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:12.179194   22727 ssh_runner.go:195] Run: systemctl --version
I1213 08:44:12.181598   22727 main.go:143] libmachine: domain functional-888658 has defined MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:12.182044   22727 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:a9:96", ip: ""} in network mk-functional-888658: {Iface:virbr1 ExpiryTime:2025-12-13 09:40:27 +0000 UTC Type:0 Mac:52:54:00:10:a9:96 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-888658 Clientid:01:52:54:00:10:a9:96}
I1213 08:44:12.182070   22727 main.go:143] libmachine: domain functional-888658 has defined IP address 192.168.39.29 and MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:12.182234   22727 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-888658/id_rsa Username:docker}
I1213 08:44:12.281616   22727 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-888658 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-888658","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"}
,{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"55fb9edcb598c284a20a50968b08dc03ca7e2c4266ab455af16f52bddd43b8d3","repoDigests":[],"repoTags":["localhost/my-image:functional-888658"],"size":"1240000"},{"id":"f6afca449e011ac4564696c2c2f9d78931c875dc1482518037b86a6444783193","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-888658"],"size":"30"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"cd073f4c5f6a8e9dc6
f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-888658 image ls --format json --alsologtostderr:
I1213 08:44:11.981261   22716 out.go:360] Setting OutFile to fd 1 ...
I1213 08:44:11.981373   22716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:11.981378   22716 out.go:374] Setting ErrFile to fd 2...
I1213 08:44:11.981382   22716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:11.981629   22716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:44:11.982202   22716 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:11.982293   22716 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:11.984660   22716 ssh_runner.go:195] Run: systemctl --version
I1213 08:44:11.987249   22716 main.go:143] libmachine: domain functional-888658 has defined MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:11.987849   22716 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:a9:96", ip: ""} in network mk-functional-888658: {Iface:virbr1 ExpiryTime:2025-12-13 09:40:27 +0000 UTC Type:0 Mac:52:54:00:10:a9:96 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-888658 Clientid:01:52:54:00:10:a9:96}
I1213 08:44:11.987877   22716 main.go:143] libmachine: domain functional-888658 has defined IP address 192.168.39.29 and MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:11.988091   22716 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-888658/id_rsa Username:docker}
I1213 08:44:12.074550   22716 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-888658 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-888658
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: f6afca449e011ac4564696c2c2f9d78931c875dc1482518037b86a6444783193
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-888658
size: "30"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 55fb9edcb598c284a20a50968b08dc03ca7e2c4266ab455af16f52bddd43b8d3
repoDigests: []
repoTags:
- localhost/my-image:functional-888658
size: "1240000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-888658 image ls --format yaml --alsologtostderr:
I1213 08:44:11.763306   22706 out.go:360] Setting OutFile to fd 1 ...
I1213 08:44:11.763472   22706 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:11.763485   22706 out.go:374] Setting ErrFile to fd 2...
I1213 08:44:11.763491   22706 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:11.763827   22706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:44:11.764681   22706 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:11.764840   22706 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:11.767548   22706 ssh_runner.go:195] Run: systemctl --version
I1213 08:44:11.770160   22706 main.go:143] libmachine: domain functional-888658 has defined MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:11.770685   22706 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:a9:96", ip: ""} in network mk-functional-888658: {Iface:virbr1 ExpiryTime:2025-12-13 09:40:27 +0000 UTC Type:0 Mac:52:54:00:10:a9:96 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-888658 Clientid:01:52:54:00:10:a9:96}
I1213 08:44:11.770729   22706 main.go:143] libmachine: domain functional-888658 has defined IP address 192.168.39.29 and MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:11.770992   22706 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-888658/id_rsa Username:docker}
I1213 08:44:11.866825   22706 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)
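Note: the four ImageList subtests above run the same listing through each supported output format; only the rendering differs. A condensed sketch, assuming minikube is on PATH and the functional-888658 profile from the log:

  # list images in the cluster runtime in every format the tests cover
  for fmt in short table json yaml; do
    minikube -p functional-888658 image ls --format "$fmt"
  done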

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-888658 ssh pgrep buildkitd: exit status 1 (210.001226ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image build -t localhost/my-image:functional-888658 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-888658 image build -t localhost/my-image:functional-888658 testdata/build --alsologtostderr: (3.900207999s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-888658 image build -t localhost/my-image:functional-888658 testdata/build --alsologtostderr:
I1213 08:44:07.653845   22660 out.go:360] Setting OutFile to fd 1 ...
I1213 08:44:07.654109   22660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:07.654120   22660 out.go:374] Setting ErrFile to fd 2...
I1213 08:44:07.654124   22660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:44:07.654399   22660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 08:44:07.655014   22660 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:07.655690   22660 config.go:182] Loaded profile config "functional-888658": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 08:44:07.657949   22660 ssh_runner.go:195] Run: systemctl --version
I1213 08:44:07.660796   22660 main.go:143] libmachine: domain functional-888658 has defined MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:07.661406   22660 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:a9:96", ip: ""} in network mk-functional-888658: {Iface:virbr1 ExpiryTime:2025-12-13 09:40:27 +0000 UTC Type:0 Mac:52:54:00:10:a9:96 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-888658 Clientid:01:52:54:00:10:a9:96}
I1213 08:44:07.661442   22660 main.go:143] libmachine: domain functional-888658 has defined IP address 192.168.39.29 and MAC address 52:54:00:10:a9:96 in network mk-functional-888658
I1213 08:44:07.661638   22660 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/functional-888658/id_rsa Username:docker}
I1213 08:44:07.762176   22660 build_images.go:162] Building image from path: /tmp/build.3980970837.tar
I1213 08:44:07.762241   22660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:44:07.784752   22660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3980970837.tar
I1213 08:44:07.791928   22660 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3980970837.tar: stat -c "%s %y" /var/lib/minikube/build/build.3980970837.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3980970837.tar': No such file or directory
I1213 08:44:07.792002   22660 ssh_runner.go:362] scp /tmp/build.3980970837.tar --> /var/lib/minikube/build/build.3980970837.tar (3072 bytes)
I1213 08:44:07.840261   22660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3980970837
I1213 08:44:07.855013   22660 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3980970837 -xf /var/lib/minikube/build/build.3980970837.tar
I1213 08:44:07.875691   22660 docker.go:361] Building image: /var/lib/minikube/build/build.3980970837
I1213 08:44:07.875797   22660 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-888658 /var/lib/minikube/build/build.3980970837
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:55fb9edcb598c284a20a50968b08dc03ca7e2c4266ab455af16f52bddd43b8d3 done
#8 naming to localhost/my-image:functional-888658 done
#8 DONE 0.1s
I1213 08:44:11.436585   22660 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-888658 /var/lib/minikube/build/build.3980970837: (3.560746005s)
I1213 08:44:11.436683   22660 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3980970837
I1213 08:44:11.459169   22660 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3980970837.tar
I1213 08:44:11.480352   22660 build_images.go:218] Built localhost/my-image:functional-888658 from /tmp/build.3980970837.tar
I1213 08:44:11.480392   22660 build_images.go:134] succeeded building to: functional-888658
I1213 08:44:11.480399   22660 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.32s)
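Note: ImageBuild first confirms buildkitd is not running in the guest (ssh pgrep buildkitd) and then builds the testdata/build context inside the VM. The build log above shows a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of the same flow against a hypothetical local context ./build-ctx, assuming minikube is on PATH:

  mkdir -p build-ctx && echo hello > build-ctx/content.txt
  # reconstructed from the build steps above; the real testdata/build contents are not shown in this log
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > build-ctx/Dockerfile
  minikube -p functional-888658 image build -t localhost/my-image:functional-888658 ./build-ctx
  minikube -p functional-888658 image ls | grep my-image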

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image load --daemon kicbase/echo-server:functional-888658 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-888658 image load --daemon kicbase/echo-server:functional-888658 --alsologtostderr: (1.316197319s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image load --daemon kicbase/echo-server:functional-888658 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-888658
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image load --daemon kicbase/echo-server:functional-888658 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)
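Note: Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all follow the same pattern: tag an image in the host docker daemon, then copy it into the cluster's runtime with image load --daemon. A minimal sketch, assuming minikube and docker are on PATH and reusing the tag from the log:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-888658
  minikube -p functional-888658 image load --daemon kicbase/echo-server:functional-888658
  minikube -p functional-888658 image ls | grep echo-server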

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image save kicbase/echo-server:functional-888658 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image rm kicbase/echo-server:functional-888658 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-888658
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 image save --daemon kicbase/echo-server:functional-888658 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.50s)
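Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a round trip: export the image from the cluster runtime to a tarball, remove it, re-import it from the tarball, then copy it back into the host docker daemon. A condensed sketch, assuming minikube and docker are on PATH; /tmp/echo-server-save.tar is a hypothetical path standing in for the workspace path used in this run:

  minikube -p functional-888658 image save kicbase/echo-server:functional-888658 /tmp/echo-server-save.tar
  minikube -p functional-888658 image rm kicbase/echo-server:functional-888658
  minikube -p functional-888658 image load /tmp/echo-server-save.tar
  minikube -p functional-888658 image save --daemon kicbase/echo-server:functional-888658
  docker image inspect kicbase/echo-server:functional-888658 >/dev/null && echo "image back in host daemon"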

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-888658 docker-env) && out/minikube-linux-amd64 status -p functional-888658"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-888658 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.79s)
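Note: DockerEnv/bash checks that eval-ing docker-env points the host docker CLI at the daemon inside the VM. The same check by hand, assuming a bash shell and minikube on PATH:

  # after the eval, docker talks to the VM's daemon, so the listing matches `minikube image ls`
  eval "$(minikube -p functional-888658 docker-env)" && docker images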

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-888658 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)
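Note: the three UpdateContextCmd subtests differ only in the kubeconfig state they start from; each runs the same command to re-point the kubeconfig entry at the cluster's current IP. Manual equivalent, assuming minikube and kubectl are on PATH; the current-context check is an illustrative extra:

  minikube -p functional-888658 update-context
  kubectl config current-context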

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-888658
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestGvisorAddon (183.43s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-751442 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-751442 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m9.74595571s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-751442 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-751442 cache add gcr.io/k8s-minikube/gvisor-addon:2: (4.678755551s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-751442 addons enable gvisor
E1213 09:27:37.332477   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-751442 addons enable gvisor: (4.613471302s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [c178e118-225a-4704-9387-529577e7838c] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00457325s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-751442 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [8d9624e4-60ed-45b2-8cd5-67102f8a47f3] Pending
helpers_test.go:353: "nginx-gvisor" [8d9624e4-60ed-45b2-8cd5-67102f8a47f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1213 09:27:57.302176   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "nginx-gvisor" [8d9624e4-60ed-45b2-8cd5-67102f8a47f3] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 44.006829467s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-751442
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-751442: (7.758193221s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-751442 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-751442 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (34.351307845s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [c178e118-225a-4704-9387-529577e7838c] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:353: "gvisor" [c178e118-225a-4704-9387-529577e7838c] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00524967s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [8d9624e4-60ed-45b2-8cd5-67102f8a47f3] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1213 09:29:18.851904   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:29:19.223564   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008459956s
helpers_test.go:176: Cleaning up "gvisor-751442" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-751442
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-751442: (1.08924298s)
--- PASS: TestGvisorAddon (183.43s)
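Note: TestGvisorAddon drives the whole addon lifecycle: start a containerd-based cluster, enable the gvisor addon, run an nginx pod under gVisor, and confirm both come back after a stop/start. A condensed sketch of the commands taken from the log, assuming minikube is on PATH; nginx-gvisor.yaml stands in for the test's testdata manifest, whose contents are not shown here:

  minikube start -p gvisor-751442 --memory=3072 --container-runtime=containerd \
      --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
  minikube -p gvisor-751442 addons enable gvisor
  kubectl --context gvisor-751442 replace --force -f nginx-gvisor.yaml
  minikube stop -p gvisor-751442
  minikube start -p gvisor-751442 --memory=3072 --container-runtime=containerd \
      --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
  minikube delete -p gvisor-751442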

TestMultiControlPlane/serial/StartCluster (267.75s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1213 08:44:59.829360   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:45:40.790783   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:02.713262   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:37.333735   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:44.843882   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:44.850410   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:44.861946   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:44.883443   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:44.925055   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:45.006622   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:45.168269   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:45.490126   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:46.132252   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:47.413643   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:49.975973   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:55.097683   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:49:05.340031   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (4m27.109803963s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (267.75s)
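Note: StartCluster brings up a multi-control-plane (HA) cluster in a single command; --ha requests the extra control-plane nodes and --wait true blocks until the components report healthy. Manual equivalent, assuming minikube is on PATH:

  minikube -p ha-493461 start --ha --memory 3072 --wait true --driver=kvm2
  minikube -p ha-493461 status    # prints one status block per node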

TestMultiControlPlane/serial/DeployApp (7.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 kubectl -- rollout status deployment/busybox: (4.50120622s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-7dqf9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-g7jz5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-tfhvc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-7dqf9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-g7jz5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-tfhvc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-7dqf9 -- nslookup kubernetes.default.svc.cluster.local
E1213 08:49:18.852293   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-g7jz5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-tfhvc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.31s)

TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-7dqf9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-7dqf9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-g7jz5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-g7jz5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-tfhvc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 kubectl -- exec busybox-7b57f96db7-tfhvc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
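Note: DeployApp and PingHostFromPods validate in-cluster DNS and pod-to-host networking by exec-ing into the busybox pods. The essential checks, lifted from the commands in the log; the pod name and 192.168.39.1 (the host-side address of the KVM network in this run) would differ in another environment:

  kubectl --context ha-493461 exec busybox-7b57f96db7-7dqf9 -- nslookup kubernetes.default.svc.cluster.local
  kubectl --context ha-493461 exec busybox-7b57f96db7-7dqf9 -- sh -c "ping -c 1 192.168.39.1"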

TestMultiControlPlane/serial/AddWorkerNode (52.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node add --alsologtostderr -v 5
E1213 08:49:25.821555   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:49:46.554781   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:50:06.783707   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 node add --alsologtostderr -v 5: (51.347708605s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.08s)
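Note: AddWorkerNode grows the running HA cluster by one worker; the node name (ha-493461-m04 in the later CopyFile steps) is assigned automatically. Manual equivalent, assuming minikube is on PATH:

  minikube -p ha-493461 node add
  minikube -p ha-493461 status    # should now include the additional worker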

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-493461 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp testdata/cp-test.txt ha-493461:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile793609284/001/cp-test_ha-493461.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461:/home/docker/cp-test.txt ha-493461-m02:/home/docker/cp-test_ha-493461_ha-493461-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test_ha-493461_ha-493461-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461:/home/docker/cp-test.txt ha-493461-m03:/home/docker/cp-test_ha-493461_ha-493461-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test_ha-493461_ha-493461-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461:/home/docker/cp-test.txt ha-493461-m04:/home/docker/cp-test_ha-493461_ha-493461-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test_ha-493461_ha-493461-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp testdata/cp-test.txt ha-493461-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile793609284/001/cp-test_ha-493461-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m02:/home/docker/cp-test.txt ha-493461:/home/docker/cp-test_ha-493461-m02_ha-493461.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test_ha-493461-m02_ha-493461.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m02:/home/docker/cp-test.txt ha-493461-m03:/home/docker/cp-test_ha-493461-m02_ha-493461-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test_ha-493461-m02_ha-493461-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m02:/home/docker/cp-test.txt ha-493461-m04:/home/docker/cp-test_ha-493461-m02_ha-493461-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test_ha-493461-m02_ha-493461-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp testdata/cp-test.txt ha-493461-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile793609284/001/cp-test_ha-493461-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m03:/home/docker/cp-test.txt ha-493461:/home/docker/cp-test_ha-493461-m03_ha-493461.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test_ha-493461-m03_ha-493461.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m03:/home/docker/cp-test.txt ha-493461-m02:/home/docker/cp-test_ha-493461-m03_ha-493461-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test_ha-493461-m03_ha-493461-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m03:/home/docker/cp-test.txt ha-493461-m04:/home/docker/cp-test_ha-493461-m03_ha-493461-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test_ha-493461-m03_ha-493461-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp testdata/cp-test.txt ha-493461-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile793609284/001/cp-test_ha-493461-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m04:/home/docker/cp-test.txt ha-493461:/home/docker/cp-test_ha-493461-m04_ha-493461.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test_ha-493461-m04_ha-493461.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m04:/home/docker/cp-test.txt ha-493461-m02:/home/docker/cp-test_ha-493461-m04_ha-493461-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test_ha-493461-m04_ha-493461-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 cp ha-493461-m04:/home/docker/cp-test.txt ha-493461-m03:/home/docker/cp-test_ha-493461-m04_ha-493461-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m03 "sudo cat /home/docker/cp-test_ha-493461-m04_ha-493461-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.54s)
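
A minimal sketch of the same copy-and-verify roundtrip for a single node pair, assuming the ha-493461 profile is still running; the paths mirror the ones used by the test:

	# copy a local file into the primary node, mirror it to m02, then read both copies back over ssh
	out/minikube-linux-amd64 -p ha-493461 cp testdata/cp-test.txt ha-493461:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-493461 cp ha-493461:/home/docker/cp-test.txt ha-493461-m02:/home/docker/cp-test_ha-493461_ha-493461-m02.txt
	out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p ha-493461 ssh -n ha-493461-m02 "sudo cat /home/docker/cp-test_ha-493461_ha-493461-m02.txt"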

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (15.47s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 node stop m02 --alsologtostderr -v 5: (14.914269051s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5: exit status 7 (551.127984ms)

-- stdout --
	ha-493461
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-493461-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-493461-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-493461-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1213 08:50:40.459260   25895 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:50:40.459550   25895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:50:40.459562   25895 out.go:374] Setting ErrFile to fd 2...
	I1213 08:50:40.459566   25895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:50:40.459754   25895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:50:40.459913   25895 out.go:368] Setting JSON to false
	I1213 08:50:40.459944   25895 mustload.go:66] Loading cluster: ha-493461
	I1213 08:50:40.460013   25895 notify.go:221] Checking for updates...
	I1213 08:50:40.460295   25895 config.go:182] Loaded profile config "ha-493461": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:50:40.460311   25895 status.go:174] checking status of ha-493461 ...
	I1213 08:50:40.463088   25895 status.go:371] ha-493461 host status = "Running" (err=<nil>)
	I1213 08:50:40.463115   25895 host.go:66] Checking if "ha-493461" exists ...
	I1213 08:50:40.466335   25895 main.go:143] libmachine: domain ha-493461 has defined MAC address 52:54:00:dd:87:ce in network mk-ha-493461
	I1213 08:50:40.466943   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:87:ce", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:45:00 +0000 UTC Type:0 Mac:52:54:00:dd:87:ce Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-493461 Clientid:01:52:54:00:dd:87:ce}
	I1213 08:50:40.466980   25895 main.go:143] libmachine: domain ha-493461 has defined IP address 192.168.39.108 and MAC address 52:54:00:dd:87:ce in network mk-ha-493461
	I1213 08:50:40.467185   25895 host.go:66] Checking if "ha-493461" exists ...
	I1213 08:50:40.467462   25895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:50:40.471039   25895 main.go:143] libmachine: domain ha-493461 has defined MAC address 52:54:00:dd:87:ce in network mk-ha-493461
	I1213 08:50:40.471714   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:87:ce", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:45:00 +0000 UTC Type:0 Mac:52:54:00:dd:87:ce Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-493461 Clientid:01:52:54:00:dd:87:ce}
	I1213 08:50:40.471747   25895 main.go:143] libmachine: domain ha-493461 has defined IP address 192.168.39.108 and MAC address 52:54:00:dd:87:ce in network mk-ha-493461
	I1213 08:50:40.472026   25895 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/ha-493461/id_rsa Username:docker}
	I1213 08:50:40.563665   25895 ssh_runner.go:195] Run: systemctl --version
	I1213 08:50:40.571099   25895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:50:40.592952   25895 kubeconfig.go:125] found "ha-493461" server: "https://192.168.39.254:8443"
	I1213 08:50:40.592995   25895 api_server.go:166] Checking apiserver status ...
	I1213 08:50:40.593040   25895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:50:40.620537   25895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2578/cgroup
	W1213 08:50:40.638937   25895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:50:40.639003   25895 ssh_runner.go:195] Run: ls
	I1213 08:50:40.645208   25895 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 08:50:40.651553   25895 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 08:50:40.651586   25895 status.go:463] ha-493461 apiserver status = Running (err=<nil>)
	I1213 08:50:40.651597   25895 status.go:176] ha-493461 status: &{Name:ha-493461 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:50:40.651617   25895 status.go:174] checking status of ha-493461-m02 ...
	I1213 08:50:40.653566   25895 status.go:371] ha-493461-m02 host status = "Stopped" (err=<nil>)
	I1213 08:50:40.653590   25895 status.go:384] host is not running, skipping remaining checks
	I1213 08:50:40.653598   25895 status.go:176] ha-493461-m02 status: &{Name:ha-493461-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:50:40.653620   25895 status.go:174] checking status of ha-493461-m03 ...
	I1213 08:50:40.655182   25895 status.go:371] ha-493461-m03 host status = "Running" (err=<nil>)
	I1213 08:50:40.655205   25895 host.go:66] Checking if "ha-493461-m03" exists ...
	I1213 08:50:40.657956   25895 main.go:143] libmachine: domain ha-493461-m03 has defined MAC address 52:54:00:51:38:26 in network mk-ha-493461
	I1213 08:50:40.658668   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:38:26", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:47:21 +0000 UTC Type:0 Mac:52:54:00:51:38:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-493461-m03 Clientid:01:52:54:00:51:38:26}
	I1213 08:50:40.658702   25895 main.go:143] libmachine: domain ha-493461-m03 has defined IP address 192.168.39.251 and MAC address 52:54:00:51:38:26 in network mk-ha-493461
	I1213 08:50:40.659008   25895 host.go:66] Checking if "ha-493461-m03" exists ...
	I1213 08:50:40.659277   25895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:50:40.661709   25895 main.go:143] libmachine: domain ha-493461-m03 has defined MAC address 52:54:00:51:38:26 in network mk-ha-493461
	I1213 08:50:40.662175   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:38:26", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:47:21 +0000 UTC Type:0 Mac:52:54:00:51:38:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-493461-m03 Clientid:01:52:54:00:51:38:26}
	I1213 08:50:40.662200   25895 main.go:143] libmachine: domain ha-493461-m03 has defined IP address 192.168.39.251 and MAC address 52:54:00:51:38:26 in network mk-ha-493461
	I1213 08:50:40.662366   25895 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/ha-493461-m03/id_rsa Username:docker}
	I1213 08:50:40.749854   25895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:50:40.770161   25895 kubeconfig.go:125] found "ha-493461" server: "https://192.168.39.254:8443"
	I1213 08:50:40.770196   25895 api_server.go:166] Checking apiserver status ...
	I1213 08:50:40.770232   25895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:50:40.791805   25895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2394/cgroup
	W1213 08:50:40.806670   25895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2394/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:50:40.806749   25895 ssh_runner.go:195] Run: ls
	I1213 08:50:40.813122   25895 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 08:50:40.819515   25895 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 08:50:40.819541   25895 status.go:463] ha-493461-m03 apiserver status = Running (err=<nil>)
	I1213 08:50:40.819549   25895 status.go:176] ha-493461-m03 status: &{Name:ha-493461-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:50:40.819564   25895 status.go:174] checking status of ha-493461-m04 ...
	I1213 08:50:40.821630   25895 status.go:371] ha-493461-m04 host status = "Running" (err=<nil>)
	I1213 08:50:40.821654   25895 host.go:66] Checking if "ha-493461-m04" exists ...
	I1213 08:50:40.825406   25895 main.go:143] libmachine: domain ha-493461-m04 has defined MAC address 52:54:00:15:bf:87 in network mk-ha-493461
	I1213 08:50:40.825983   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:bf:87", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:49:38 +0000 UTC Type:0 Mac:52:54:00:15:bf:87 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:ha-493461-m04 Clientid:01:52:54:00:15:bf:87}
	I1213 08:50:40.826021   25895 main.go:143] libmachine: domain ha-493461-m04 has defined IP address 192.168.39.177 and MAC address 52:54:00:15:bf:87 in network mk-ha-493461
	I1213 08:50:40.826254   25895 host.go:66] Checking if "ha-493461-m04" exists ...
	I1213 08:50:40.826639   25895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:50:40.829279   25895 main.go:143] libmachine: domain ha-493461-m04 has defined MAC address 52:54:00:15:bf:87 in network mk-ha-493461
	I1213 08:50:40.829899   25895 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:bf:87", ip: ""} in network mk-ha-493461: {Iface:virbr1 ExpiryTime:2025-12-13 09:49:38 +0000 UTC Type:0 Mac:52:54:00:15:bf:87 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:ha-493461-m04 Clientid:01:52:54:00:15:bf:87}
	I1213 08:50:40.829939   25895 main.go:143] libmachine: domain ha-493461-m04 has defined IP address 192.168.39.177 and MAC address 52:54:00:15:bf:87 in network mk-ha-493461
	I1213 08:50:40.830102   25895 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/ha-493461-m04/id_rsa Username:docker}
	I1213 08:50:40.916857   25895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:50:40.944950   25895 status.go:176] ha-493461-m04 status: &{Name:ha-493461-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.47s)
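
A minimal way to reproduce this degraded-but-healthy state by hand, assuming the ha-493461 profile from the steps above; exit status 7 from the status command is expected while any node is stopped, as the run above shows:

	# stop the second control-plane node, then inspect per-node status
	out/minikube-linux-amd64 -p ha-493461 node stop m02 --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5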

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 node start m02 --alsologtostderr -v 5: (30.883291737s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.88s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (166.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 stop --alsologtostderr -v 5
E1213 08:51:28.706177   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 stop --alsologtostderr -v 5: (42.61437379s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 start --wait true --alsologtostderr -v 5
E1213 08:52:37.332109   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:44.844292   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:54:00.411761   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 start --wait true --alsologtostderr -v 5: (2m3.902278808s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (166.67s)
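
What the test compares is the node list before and after a full stop/start cycle; done manually with the same commands, the check amounts to:

	# capture the node list, restart the whole cluster, and confirm the list is unchanged
	out/minikube-linux-amd64 -p ha-493461 node list --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 stop --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 start --wait true --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 node list --alsologtostderr -v 5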

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.56s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 node delete m03 --alsologtostderr -v 5: (6.867903393s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.56s)
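
The readiness assertion uses a go-template over node conditions; a sketch of the same one-liner with simplified quoting, assuming kubectl points at the ha-493461 context:

	# print one Ready/NotReady status line per remaining node
	kubectl --context ha-493461 get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'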

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (40.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 stop --alsologtostderr -v 5
E1213 08:54:12.547657   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:54:18.852416   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 stop --alsologtostderr -v 5: (40.158546764s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5: exit status 7 (69.39587ms)

-- stdout --
	ha-493461
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-493461-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-493461-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 08:54:49.279100   27506 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:54:49.279364   27506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:49.279373   27506 out.go:374] Setting ErrFile to fd 2...
	I1213 08:54:49.279377   27506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:49.279593   27506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 08:54:49.279795   27506 out.go:368] Setting JSON to false
	I1213 08:54:49.279827   27506 mustload.go:66] Loading cluster: ha-493461
	I1213 08:54:49.279982   27506 notify.go:221] Checking for updates...
	I1213 08:54:49.280243   27506 config.go:182] Loaded profile config "ha-493461": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:54:49.280270   27506 status.go:174] checking status of ha-493461 ...
	I1213 08:54:49.282650   27506 status.go:371] ha-493461 host status = "Stopped" (err=<nil>)
	I1213 08:54:49.282677   27506 status.go:384] host is not running, skipping remaining checks
	I1213 08:54:49.282685   27506 status.go:176] ha-493461 status: &{Name:ha-493461 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:54:49.282708   27506 status.go:174] checking status of ha-493461-m02 ...
	I1213 08:54:49.284127   27506 status.go:371] ha-493461-m02 host status = "Stopped" (err=<nil>)
	I1213 08:54:49.284146   27506 status.go:384] host is not running, skipping remaining checks
	I1213 08:54:49.284151   27506 status.go:176] ha-493461-m02 status: &{Name:ha-493461-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:54:49.284168   27506 status.go:174] checking status of ha-493461-m04 ...
	I1213 08:54:49.285564   27506 status.go:371] ha-493461-m04 host status = "Stopped" (err=<nil>)
	I1213 08:54:49.285584   27506 status.go:384] host is not running, skipping remaining checks
	I1213 08:54:49.285590   27506 status.go:176] ha-493461-m04 status: &{Name:ha-493461-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (40.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (137.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 start --wait true --alsologtostderr -v 5 --driver=kvm2 
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (2m17.057241644s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (137.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (93.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 node add --control-plane --alsologtostderr -v 5
E1213 08:57:37.333505   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-493461 node add --control-plane --alsologtostderr -v 5: (1m32.652068803s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.41s)
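
Note the only difference from the earlier AddWorkerNode step is the --control-plane flag, which joins the new machine as an additional control-plane member rather than a worker; both variants appear in this run:

	# add a worker node versus an extra control-plane node, then re-check cluster status
	out/minikube-linux-amd64 -p ha-493461 node add --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 node add --control-plane --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-493461 status --alsologtostderr -v 5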

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
TestImageBuild/serial/Setup (45.75s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-073345 --driver=kvm2 
E1213 08:59:18.852946   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-073345 --driver=kvm2 : (45.74905468s)
--- PASS: TestImageBuild/serial/Setup (45.75s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-073345
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-073345: (1.68650185s)
--- PASS: TestImageBuild/serial/NormalBuild (1.69s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-073345
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-073345: (1.024020036s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)
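
The testdata Dockerfile is not included in this log; a hypothetical equivalent that consumes the ENV_A argument passed via --build-opt=build-arg could look like the sketch below, built with the same command:

	# hypothetical ./testdata/image-build/test-arg/Dockerfile:
	#   FROM busybox
	#   ARG ENV_A
	#   RUN echo "ENV_A is ${ENV_A}"
	out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-073345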

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-073345
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.18s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-073345
image_test.go:88: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-073345: (1.177007924s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.18s)

                                                
                                    
TestJSONOutput/start/Command (91.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-251666 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1213 09:00:41.918827   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-251666 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m31.696444557s)
--- PASS: TestJSONOutput/start/Command (91.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-251666 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-251666 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (14.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-251666 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-251666 --output=json --user=testUser: (14.472801771s)
--- PASS: TestJSONOutput/stop/Command (14.47s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-820515 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-820515 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.55063ms)

-- stdout --
	{"specversion":"1.0","id":"c4aab6e9-5254-4474-a16a-e6eedef78c19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-820515] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5d5b9a3-3149-4d04-9d46-22bdebc4f5ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"db65c467-b96b-4599-93c4-fe64434272bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3669b9ad-ee4a-40f3-a4ef-fe75e77ea44c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig"}}
	{"specversion":"1.0","id":"6a47dff5-738e-4a97-989d-f38b873a061f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube"}}
	{"specversion":"1.0","id":"6ce45ef1-a2ec-4543-a5bb-59fa55a1435a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"13a0047b-ac64-4bea-9cf3-2f65f73f0a01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3154e7de-e78b-4ff8-b59d-fae7a5cd9948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-820515" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-820515
--- PASS: TestErrorJSONOutput (0.25s)
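
Each line of the --output=json stream above is a CloudEvents-style JSON object; assuming jq is available, a sketch for isolating just the error event from such a run could be:

	# keep only events of type io.k8s.sigs.minikube.error and print their payload
	out/minikube-linux-amd64 start -p json-output-error-820515 --memory=3072 --output=json --wait=true --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'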

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (96.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-789228 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-789228 --driver=kvm2 : (47.39976322s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-792159 --driver=kvm2 
E1213 09:02:37.333955   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-792159 --driver=kvm2 : (46.547102345s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-789228
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-792159
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-792159" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-792159
helpers_test.go:176: Cleaning up "first-789228" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-789228
--- PASS: TestMinikubeProfile (96.67s)
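
The profile switch itself is just two commands; profile list -ojson is what the test parses to confirm which profile is active:

	# make first-789228 the active profile and dump all profiles as JSON
	out/minikube-linux-amd64 profile first-789228
	out/minikube-linux-amd64 profile list -ojson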

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-289331 --memory=3072 --mount-string /tmp/TestMountStartserial3609967030/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-289331 --memory=3072 --mount-string /tmp/TestMountStartserial3609967030/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (23.567163601s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-289331 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-289331 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)
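
The mount check can be repeated outside the test while the mount-start-1-289331 profile from the previous step is still up:

	# list the mounted host directory and dump its mount entry as JSON
	out/minikube-linux-amd64 -p mount-start-1-289331 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-289331 ssh -- findmnt --json /minikube-host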

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-309279 --memory=3072 --mount-string /tmp/TestMountStartserial3609967030/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1213 09:03:44.848511   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-309279 --memory=3072 --mount-string /tmp/TestMountStartserial3609967030/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (24.184095243s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.18s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-289331 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

                                                
                                    
TestMountStart/serial/Stop (1.36s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-309279
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-309279: (1.357375868s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.09s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-309279
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-309279: (22.085492783s)
--- PASS: TestMountStart/serial/RestartStopped (23.09s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-309279 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (121.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026667 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1213 09:04:18.852457   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:05:07.909613   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026667 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (2m1.306928949s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.65s)

TestMultiNode/serial/DeployApp2Nodes (5.91s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-026667 -- rollout status deployment/busybox: (4.092746075s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-qgmtq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-wbm2n -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-qgmtq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-wbm2n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-qgmtq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-wbm2n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.91s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-qgmtq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-qgmtq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-wbm2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026667 -- exec busybox-7b57f96db7-wbm2n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (49.41s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026667 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-026667 -v=5 --alsologtostderr: (48.918113293s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.41s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-026667 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.48s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

TestMultiNode/serial/CopyFile (6.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp testdata/cp-test.txt multinode-026667:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1066012845/001/cp-test_multinode-026667.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667:/home/docker/cp-test.txt multinode-026667-m02:/home/docker/cp-test_multinode-026667_multinode-026667-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test_multinode-026667_multinode-026667-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667:/home/docker/cp-test.txt multinode-026667-m03:/home/docker/cp-test_multinode-026667_multinode-026667-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test_multinode-026667_multinode-026667-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp testdata/cp-test.txt multinode-026667-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1066012845/001/cp-test_multinode-026667-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m02:/home/docker/cp-test.txt multinode-026667:/home/docker/cp-test_multinode-026667-m02_multinode-026667.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test_multinode-026667-m02_multinode-026667.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m02:/home/docker/cp-test.txt multinode-026667-m03:/home/docker/cp-test_multinode-026667-m02_multinode-026667-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test_multinode-026667-m02_multinode-026667-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp testdata/cp-test.txt multinode-026667-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1066012845/001/cp-test_multinode-026667-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m03:/home/docker/cp-test.txt multinode-026667:/home/docker/cp-test_multinode-026667-m03_multinode-026667.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667 "sudo cat /home/docker/cp-test_multinode-026667-m03_multinode-026667.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 cp multinode-026667-m03:/home/docker/cp-test.txt multinode-026667-m02:/home/docker/cp-test_multinode-026667-m03_multinode-026667-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 ssh -n multinode-026667-m02 "sudo cat /home/docker/cp-test_multinode-026667-m03_multinode-026667-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.25s)

TestMultiNode/serial/StopNode (2.47s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-026667 node stop m03: (1.7617189s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026667 status: exit status 7 (350.156906ms)

-- stdout --
	multinode-026667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026667-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026667-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr: exit status 7 (354.216869ms)

-- stdout --
	multinode-026667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026667-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026667-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 09:07:25.010546   33891 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:07:25.010794   33891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:07:25.010803   33891 out.go:374] Setting ErrFile to fd 2...
	I1213 09:07:25.010807   33891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:07:25.011044   33891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:07:25.011207   33891 out.go:368] Setting JSON to false
	I1213 09:07:25.011233   33891 mustload.go:66] Loading cluster: multinode-026667
	I1213 09:07:25.011372   33891 notify.go:221] Checking for updates...
	I1213 09:07:25.011618   33891 config.go:182] Loaded profile config "multinode-026667": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:07:25.011635   33891 status.go:174] checking status of multinode-026667 ...
	I1213 09:07:25.013951   33891 status.go:371] multinode-026667 host status = "Running" (err=<nil>)
	I1213 09:07:25.013977   33891 host.go:66] Checking if "multinode-026667" exists ...
	I1213 09:07:25.017212   33891 main.go:143] libmachine: domain multinode-026667 has defined MAC address 52:54:00:55:92:55 in network mk-multinode-026667
	I1213 09:07:25.017745   33891 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:92:55", ip: ""} in network mk-multinode-026667: {Iface:virbr1 ExpiryTime:2025-12-13 10:04:33 +0000 UTC Type:0 Mac:52:54:00:55:92:55 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-026667 Clientid:01:52:54:00:55:92:55}
	I1213 09:07:25.017773   33891 main.go:143] libmachine: domain multinode-026667 has defined IP address 192.168.39.100 and MAC address 52:54:00:55:92:55 in network mk-multinode-026667
	I1213 09:07:25.017956   33891 host.go:66] Checking if "multinode-026667" exists ...
	I1213 09:07:25.018202   33891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:07:25.020646   33891 main.go:143] libmachine: domain multinode-026667 has defined MAC address 52:54:00:55:92:55 in network mk-multinode-026667
	I1213 09:07:25.021161   33891 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:92:55", ip: ""} in network mk-multinode-026667: {Iface:virbr1 ExpiryTime:2025-12-13 10:04:33 +0000 UTC Type:0 Mac:52:54:00:55:92:55 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-026667 Clientid:01:52:54:00:55:92:55}
	I1213 09:07:25.021195   33891 main.go:143] libmachine: domain multinode-026667 has defined IP address 192.168.39.100 and MAC address 52:54:00:55:92:55 in network mk-multinode-026667
	I1213 09:07:25.021381   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/multinode-026667/id_rsa Username:docker}
	I1213 09:07:25.107674   33891 ssh_runner.go:195] Run: systemctl --version
	I1213 09:07:25.114960   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:07:25.135788   33891 kubeconfig.go:125] found "multinode-026667" server: "https://192.168.39.100:8443"
	I1213 09:07:25.135825   33891 api_server.go:166] Checking apiserver status ...
	I1213 09:07:25.135859   33891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.158509   33891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2474/cgroup
	W1213 09:07:25.172697   33891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2474/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:07:25.172751   33891 ssh_runner.go:195] Run: ls
	I1213 09:07:25.179220   33891 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1213 09:07:25.186743   33891 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1213 09:07:25.186772   33891 status.go:463] multinode-026667 apiserver status = Running (err=<nil>)
	I1213 09:07:25.186780   33891 status.go:176] multinode-026667 status: &{Name:multinode-026667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:07:25.186798   33891 status.go:174] checking status of multinode-026667-m02 ...
	I1213 09:07:25.188704   33891 status.go:371] multinode-026667-m02 host status = "Running" (err=<nil>)
	I1213 09:07:25.188736   33891 host.go:66] Checking if "multinode-026667-m02" exists ...
	I1213 09:07:25.191488   33891 main.go:143] libmachine: domain multinode-026667-m02 has defined MAC address 52:54:00:9e:71:06 in network mk-multinode-026667
	I1213 09:07:25.192117   33891 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:71:06", ip: ""} in network mk-multinode-026667: {Iface:virbr1 ExpiryTime:2025-12-13 10:05:44 +0000 UTC Type:0 Mac:52:54:00:9e:71:06 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-026667-m02 Clientid:01:52:54:00:9e:71:06}
	I1213 09:07:25.192147   33891 main.go:143] libmachine: domain multinode-026667-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:9e:71:06 in network mk-multinode-026667
	I1213 09:07:25.192318   33891 host.go:66] Checking if "multinode-026667-m02" exists ...
	I1213 09:07:25.192572   33891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:07:25.195000   33891 main.go:143] libmachine: domain multinode-026667-m02 has defined MAC address 52:54:00:9e:71:06 in network mk-multinode-026667
	I1213 09:07:25.195479   33891 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:71:06", ip: ""} in network mk-multinode-026667: {Iface:virbr1 ExpiryTime:2025-12-13 10:05:44 +0000 UTC Type:0 Mac:52:54:00:9e:71:06 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-026667-m02 Clientid:01:52:54:00:9e:71:06}
	I1213 09:07:25.195513   33891 main.go:143] libmachine: domain multinode-026667-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:9e:71:06 in network mk-multinode-026667
	I1213 09:07:25.195736   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-9390/.minikube/machines/multinode-026667-m02/id_rsa Username:docker}
	I1213 09:07:25.278328   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:07:25.297618   33891 status.go:176] multinode-026667-m02 status: &{Name:multinode-026667-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:07:25.297669   33891 status.go:174] checking status of multinode-026667-m03 ...
	I1213 09:07:25.299417   33891 status.go:371] multinode-026667-m03 host status = "Stopped" (err=<nil>)
	I1213 09:07:25.299439   33891 status.go:384] host is not running, skipping remaining checks
	I1213 09:07:25.299445   33891 status.go:176] multinode-026667-m03 status: &{Name:multinode-026667-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)

TestMultiNode/serial/StartAfterStop (41.98s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 node start m03 -v=5 --alsologtostderr
E1213 09:07:37.333269   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-026667 node start m03 -v=5 --alsologtostderr: (41.42706694s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.98s)

TestMultiNode/serial/RestartKeepsNodes (198.85s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026667
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-026667
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-026667: (29.064384777s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026667 --wait=true -v=5 --alsologtostderr
E1213 09:08:44.846889   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:09:18.852425   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:40.413382   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026667 --wait=true -v=5 --alsologtostderr: (2m49.652015727s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026667
--- PASS: TestMultiNode/serial/RestartKeepsNodes (198.85s)

TestMultiNode/serial/DeleteNode (2.38s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-026667 node delete m03: (1.885190121s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.38s)

TestMultiNode/serial/StopMultiNode (27.18s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-026667 stop: (27.053622409s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026667 status: exit status 7 (65.098875ms)

-- stdout --
	multinode-026667
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026667-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr: exit status 7 (64.086451ms)

-- stdout --
	multinode-026667
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026667-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 09:11:55.684790   35387 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:55.685082   35387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:55.685092   35387 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:55.685096   35387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:55.685304   35387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:11:55.685511   35387 out.go:368] Setting JSON to false
	I1213 09:11:55.685539   35387 mustload.go:66] Loading cluster: multinode-026667
	I1213 09:11:55.685686   35387 notify.go:221] Checking for updates...
	I1213 09:11:55.686033   35387 config.go:182] Loaded profile config "multinode-026667": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:11:55.686054   35387 status.go:174] checking status of multinode-026667 ...
	I1213 09:11:55.688562   35387 status.go:371] multinode-026667 host status = "Stopped" (err=<nil>)
	I1213 09:11:55.688587   35387 status.go:384] host is not running, skipping remaining checks
	I1213 09:11:55.688596   35387 status.go:176] multinode-026667 status: &{Name:multinode-026667 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:11:55.688639   35387 status.go:174] checking status of multinode-026667-m02 ...
	I1213 09:11:55.690090   35387 status.go:371] multinode-026667-m02 host status = "Stopped" (err=<nil>)
	I1213 09:11:55.690116   35387 status.go:384] host is not running, skipping remaining checks
	I1213 09:11:55.690123   35387 status.go:176] multinode-026667-m02 status: &{Name:multinode-026667-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (27.18s)

TestMultiNode/serial/RestartMultiNode (130.65s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026667 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E1213 09:12:37.332661   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:13:44.843334   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026667 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (2m10.166598582s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026667 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (130.65s)

TestMultiNode/serial/ValidateNameConflict (46.45s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026667
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026667-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-026667-m02 --driver=kvm2 : exit status 14 (82.177417ms)

-- stdout --
	* [multinode-026667-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-026667-m02' is duplicated with machine name 'multinode-026667-m02' in profile 'multinode-026667'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026667-m03 --driver=kvm2 
E1213 09:14:18.852466   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026667-m03 --driver=kvm2 : (45.249016807s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026667
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-026667: exit status 80 (213.420806ms)

-- stdout --
	* Adding node m03 to cluster multinode-026667 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-026667-m03 already exists in multinode-026667-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-026667-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.45s)

TestPreload (158.9s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-134834 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-134834 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (1m33.238033321s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-134834 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-134834 image pull gcr.io/k8s-minikube/busybox: (2.451418516s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-134834
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-134834: (13.793636644s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-134834 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1213 09:17:21.920694   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-134834 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (48.369821307s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-134834 image list
helpers_test.go:176: Cleaning up "test-preload-134834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-134834
--- PASS: TestPreload (158.90s)

TestScheduledStopUnix (116.63s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-187076 --memory=3072 --driver=kvm2 
E1213 09:17:37.334131   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-187076 --memory=3072 --driver=kvm2 : (44.901761714s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-187076 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 09:18:18.215399   38249 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:18:18.215515   38249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:18.215525   38249 out.go:374] Setting ErrFile to fd 2...
	I1213 09:18:18.215530   38249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:18.215761   38249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:18:18.216031   38249 out.go:368] Setting JSON to false
	I1213 09:18:18.216114   38249 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:18.216428   38249 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:18:18.216491   38249 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/config.json ...
	I1213 09:18:18.216668   38249 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:18.216756   38249 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-187076 -n scheduled-stop-187076
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-187076 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 09:18:18.516829   38294 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:18:18.516929   38294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:18.516933   38294 out.go:374] Setting ErrFile to fd 2...
	I1213 09:18:18.516937   38294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:18.517159   38294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:18:18.517449   38294 out.go:368] Setting JSON to false
	I1213 09:18:18.517685   38294 daemonize_unix.go:73] killing process 38283 as it is an old scheduled stop
	I1213 09:18:18.517799   38294 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:18.518319   38294 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:18:18.518440   38294 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/config.json ...
	I1213 09:18:18.518709   38294 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:18.518855   38294 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 09:18:18.523958   13307 retry.go:31] will retry after 109.292µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.525158   13307 retry.go:31] will retry after 114.687µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.526358   13307 retry.go:31] will retry after 173.906µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.527553   13307 retry.go:31] will retry after 379.221µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.528740   13307 retry.go:31] will retry after 682.949µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.529915   13307 retry.go:31] will retry after 508.089µs: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.531086   13307 retry.go:31] will retry after 1.065002ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.532265   13307 retry.go:31] will retry after 1.753811ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.534510   13307 retry.go:31] will retry after 1.702755ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.536776   13307 retry.go:31] will retry after 4.75489ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.542117   13307 retry.go:31] will retry after 8.274487ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.551440   13307 retry.go:31] will retry after 7.27633ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.559729   13307 retry.go:31] will retry after 11.730599ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.572114   13307 retry.go:31] will retry after 28.386141ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.601453   13307 retry.go:31] will retry after 27.669511ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
I1213 09:18:18.629749   13307 retry.go:31] will retry after 23.589915ms: open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-187076 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-187076 -n scheduled-stop-187076
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-187076
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-187076 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 09:18:44.285126   38443 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:18:44.285453   38443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:44.285465   38443 out.go:374] Setting ErrFile to fd 2...
	I1213 09:18:44.285471   38443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:44.285694   38443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
	I1213 09:18:44.285968   38443 out.go:368] Setting JSON to false
	I1213 09:18:44.286068   38443 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:44.286438   38443 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:18:44.286522   38443 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/scheduled-stop-187076/config.json ...
	I1213 09:18:44.286757   38443 mustload.go:66] Loading cluster: scheduled-stop-187076
	I1213 09:18:44.286883   38443 config.go:182] Loaded profile config "scheduled-stop-187076": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
E1213 09:18:44.843858   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1213 09:19:18.852240   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-187076
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-187076: exit status 7 (64.246794ms)

-- stdout --
	scheduled-stop-187076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-187076 -n scheduled-stop-187076
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-187076 -n scheduled-stop-187076: exit status 7 (71.728016ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-187076" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-187076
--- PASS: TestScheduledStopUnix (116.63s)

TestSkaffold (137.53s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1868996279 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-452054 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-452054 --memory=3072 --driver=kvm2 : (46.712200657s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1868996279 run --minikube-profile skaffold-452054 --kube-context skaffold-452054 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1868996279 run --minikube-profile skaffold-452054 --kube-context skaffold-452054 --status-check=true --port-forward=false --interactive=false: (1m17.801514003s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-5b7c87d7f7-g7nmd" [a41d5b60-f34e-401b-94d1-7aa987839f1c] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004423709s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-7d45b65dc-76ht8" [27a9f67d-79e8-4db2-b76b-10bb0f7dbaaf] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004793561s
helpers_test.go:176: Cleaning up "skaffold-452054" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-452054
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-452054: (1.014148778s)
--- PASS: TestSkaffold (137.53s)

TestRunningBinaryUpgrade (399.75s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2708554662 start -p running-upgrade-034033 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2708554662 start -p running-upgrade-034033 --memory=3072 --vm-driver=kvm2 : (1m29.620747502s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-034033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-034033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (5m8.623060355s)
helpers_test.go:176: Cleaning up "running-upgrade-034033" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-034033
--- PASS: TestRunningBinaryUpgrade (399.75s)

TestKubernetesUpgrade (268.28s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
E1213 09:22:37.332191   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m49.037630892s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-100467
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-100467: (2.550110402s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-100467 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-100467 status --format={{.Host}}: exit status 7 (84.500644ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
E1213 09:23:44.843785   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (54.308645158s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-100467 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (82.375315ms)

-- stdout --
	* [kubernetes-upgrade-100467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-100467
	    minikube start -p kubernetes-upgrade-100467 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1004672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-100467 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (1m41.021542583s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-100467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-100467
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-100467: (1.135956694s)
--- PASS: TestKubernetesUpgrade (268.28s)

TestPause/serial/Start (89.01s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-032319 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-032319 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m29.005402504s)
--- PASS: TestPause/serial/Start (89.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (101.510506ms)

-- stdout --
	* [NoKubernetes-244728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (92.39s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-244728 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1213 09:21:47.911435   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-244728 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m32.091864571s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-244728 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.39s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (63.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-032319 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-032319 --alsologtostderr -v=1 --driver=kvm2 : (1m2.985529449s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (63.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (14.97911744s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-244728 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-244728 status -o json: exit status 2 (228.070938ms)

-- stdout --
	{"Name":"NoKubernetes-244728","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-244728
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.15s)

                                                
                                    
TestNoKubernetes/serial/Start (24.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-244728 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (24.75344462s)
--- PASS: TestNoKubernetes/serial/Start (24.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (164.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2946150672 start -p stopped-upgrade-126890 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2946150672 start -p stopped-upgrade-126890 --memory=3072 --vm-driver=kvm2 : (1m29.017508507s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2946150672 -p stopped-upgrade-126890 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2946150672 -p stopped-upgrade-126890 stop: (13.865489233s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-126890 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-126890 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (1m1.806390815s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (164.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-244728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-244728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.696559ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
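
The check above passes because systemctl is-active exits non-zero when the kubelet unit is not running, and minikube ssh surfaces that as exit status 1. A minimal sketch (not part of this run, assuming the profile still exists) of running the same probe by hand with the unit state printed instead of suppressed:
	# hypothetical manual probe; --quiet dropped so the state is shown
	out/minikube-linux-amd64 ssh -p NoKubernetes-244728 'sudo systemctl is-active kubelet'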

                                                
                                    
TestNoKubernetes/serial/ProfileList (8.29s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.509159021s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.776495621s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.29s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.53s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-244728
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-244728: (1.529905155s)
--- PASS: TestNoKubernetes/serial/Stop (1.53s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (53.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-244728 --driver=kvm2 
E1213 09:24:18.852433   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-244728 --driver=kvm2 : (53.493138115s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.49s)

                                                
                                    
TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-032319 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-032319 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-032319 --output=json --layout=cluster: exit status 2 (258.192729ms)

-- stdout --
	{"Name":"pause-032319","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-032319","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
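
The cluster-layout JSON above encodes component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch (not part of this run, assuming jq on the host and the profile not yet deleted) of extracting just the per-component state names from the same output:
	# hypothetical host-side filter over the same status command
	out/minikube-linux-amd64 status -p pause-032319 --output=json --layout=cluster | jq '.Nodes[].Components | map_values(.StatusName)'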

                                                
                                    
TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-032319 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-032319 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-032319 --alsologtostderr -v=5: (1.17434098s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

                                                
                                    
TestPause/serial/DeletePaused (0.91s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-032319 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.91s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-244728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-244728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.807913ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-126890
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-126890: (1.347381066s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                    
TestISOImage/Setup (61.65s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-566372 --no-kubernetes --driver=kvm2 
E1213 09:26:35.363116   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.369635   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.381105   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.402651   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.444167   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.525641   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:35.687252   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:36.008979   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:36.651189   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:37.933187   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:40.494782   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:45.616379   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:55.858445   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-566372 --no-kubernetes --driver=kvm2 : (1m1.648209019s)
--- PASS: TestISOImage/Setup (61.65s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.18s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.18s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.17s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (105.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-394980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-394980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m45.097536506s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (105.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (101.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-616969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-616969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m41.028529333s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (92.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (1m32.868780589s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-394980 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7ddf14b2-3d87-469d-84f3-4c7e8e242c15] Pending
helpers_test.go:353: "busybox" [7ddf14b2-3d87-469d-84f3-4c7e8e242c15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7ddf14b2-3d87-469d-84f3-4c7e8e242c15] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.006393254s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-394980 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-394980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-394980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.221235988s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-394980 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-394980 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-394980 --alsologtostderr -v=3: (13.932621859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-616969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2aaa7ac3-dc3a-49ec-857d-dc5277f32880] Pending
helpers_test.go:353: "busybox" [2aaa7ac3-dc3a-49ec-857d-dc5277f32880] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2aaa7ac3-dc3a-49ec-857d-dc5277f32880] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.008564537s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-616969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394980 -n old-k8s-version-394980
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394980 -n old-k8s-version-394980: exit status 7 (71.310273ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-394980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-394980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-394980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (50.013251921s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394980 -n old-k8s-version-394980
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-616969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-616969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.158829318s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-616969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-616969 --alsologtostderr -v=3
E1213 09:31:35.364003   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-616969 --alsologtostderr -v=3: (14.541122956s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616969 -n no-preload-616969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616969 -n no-preload-616969: exit status 7 (75.630961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-616969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (55.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-616969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 09:32:03.064901   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-616969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (55.633699451s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616969 -n no-preload-616969
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fzc2v" [fa9bfdda-89c0-4e5b-a19e-559eb5c9323d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fzc2v" [fa9bfdda-89c0-4e5b-a19e-559eb5c9323d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005653514s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (57.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (57.908389591s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-594077 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cdc04b54-532c-478b-8fb8-8c3741dfbb4a] Pending
E1213 09:32:37.332194   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/addons-527167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.335754   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.342217   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.353765   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.375332   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.416693   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.498252   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [cdc04b54-532c-478b-8fb8-8c3741dfbb4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 09:32:37.660235   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:37.982204   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:38.623744   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [cdc04b54-532c-478b-8fb8-8c3741dfbb4a] Running
E1213 09:32:42.467401   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00532639s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-594077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fzc2v" [fa9bfdda-89c0-4e5b-a19e-559eb5c9323d] Running
E1213 09:32:39.905166   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005257847s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-394980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-394980 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-394980 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394980 -n old-k8s-version-394980
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394980 -n old-k8s-version-394980: exit status 2 (275.438217ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394980 -n old-k8s-version-394980
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394980 -n old-k8s-version-394980: exit status 2 (265.110306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-394980 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394980 -n old-k8s-version-394980
E1213 09:32:47.589399   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394980 -n old-k8s-version-394980
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.49s)
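
For reference, the sequence above is the full pause verification: pause, check {{.APIServer}} and {{.Kubelet}} via status, then unpause and check again. A minimal sketch (not part of this run, assuming the old-k8s-version profile still exists) of replaying it by hand:
	# hypothetical manual replay of the same commands the test drives
	out/minikube-linux-amd64 pause -p old-k8s-version-394980 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394980   # "Paused" expected
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394980     # "Stopped" expected
	out/minikube-linux-amd64 unpause -p old-k8s-version-394980 --alsologtostderr -v=1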

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jvbrd" [489c0d44-4d05-4e62-b83b-726de7de7b17] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jvbrd" [489c0d44-4d05-4e62-b83b-726de7de7b17] Running
E1213 09:32:57.831154   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.00568003s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-594077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-594077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.322846407s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-594077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-594077 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-594077 --alsologtostderr -v=3: (13.983877148s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (1m29.232253496s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jvbrd" [489c0d44-4d05-4e62-b83b-726de7de7b17] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006065329s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-616969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077: exit status 7 (76.306767ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-594077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (55.992104892s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-616969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-616969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-616969 --alsologtostderr -v=1: (1.02186652s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616969 -n no-preload-616969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616969 -n no-preload-616969: exit status 2 (282.399525ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616969 -n no-preload-616969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616969 -n no-preload-616969: exit status 2 (290.929765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-616969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616969 -n no-preload-616969
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616969 -n no-preload-616969
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)
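
For reference, the pause/verify/unpause sequence exercised above can be replayed by hand; a minimal sketch using the profile name from this run (expected outputs taken from the stdout captures above):
# Pause the control plane, then check the reported component states.
out/minikube-linux-amd64 pause -p no-preload-616969 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616969 -n no-preload-616969   # prints "Paused"; exits non-zero while paused
out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616969 -n no-preload-616969     # prints "Stopped"
# Resume; afterwards both status calls return to exit code 0.
out/minikube-linux-amd64 unpause -p no-preload-616969 --alsologtostderr -v=1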

                                                
                                    
TestNetworkPlugins/group/auto/Start (114.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1213 09:33:18.313212   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m54.326051365s)
--- PASS: TestNetworkPlugins/group/auto/Start (114.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.211895833s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-719997 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-719997 --alsologtostderr -v=3: (13.709429719s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-719997 -n newest-cni-719997
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-719997 -n newest-cni-719997: exit status 7 (84.131218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-719997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (57.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 09:33:44.844196   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (57.730296085s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-719997 -n newest-cni-719997
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.98s)
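
The restart above pins the pod network CIDR through --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16; one way to confirm the node picked it up (illustrative, not part of the test):
# The node's spec.podCIDR should fall inside 10.42.0.0/16.
kubectl --context newest-cni-719997 get nodes -o jsonpath='{.items[0].spec.podCIDR}'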

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5ckvx" [ffb1a3ff-2bff-4ba7-94f3-20b13948a60a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1213 09:33:59.275073   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:34:01.922971   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5ckvx" [ffb1a3ff-2bff-4ba7-94f3-20b13948a60a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004302419s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5ckvx" [ffb1a3ff-2bff-4ba7-94f3-20b13948a60a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005867348s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-594077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
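
The addon check above boils down to waiting for the dashboard pod by label and then describing the scraper deployment; the equivalent manual commands for this profile:
# Confirm the dashboard pod is Running, then inspect the metrics scraper deployment.
kubectl --context embed-certs-594077 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl --context embed-certs-594077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard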

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-594077 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [46b9d7b0-c9d8-4b68-a481-fe5d6570f93d] Pending
helpers_test.go:353: "busybox" [46b9d7b0-c9d8-4b68-a481-fe5d6570f93d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [46b9d7b0-c9d8-4b68-a481-fe5d6570f93d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005373526s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)
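
The deploy step above can be reproduced from the test's working directory (where testdata/busybox.yaml lives); the kubectl wait call here is a convenience for hand-running it, not something the test itself issues:
# Create the busybox pod, wait for it to become Ready, then read its open-file limit as the test does.
kubectl --context default-k8s-diff-port-018953 create -f testdata/busybox.yaml
kubectl --context default-k8s-diff-port-018953 wait --for=condition=Ready pod/busybox --timeout=8m
kubectl --context default-k8s-diff-port-018953 exec busybox -- /bin/sh -c "ulimit -n"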

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.150547432s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-018953 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-018953 --alsologtostderr -v=3: (13.097940804s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-719997 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-719997 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-719997 -n newest-cni-719997
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-719997 -n newest-cni-719997: exit status 2 (301.892539ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-719997 -n newest-cni-719997
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-719997 -n newest-cni-719997: exit status 2 (231.616164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-719997 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-719997 -n newest-cni-719997
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-719997 -n newest-cni-719997
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.74s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m19.698657305s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953: exit status 7 (65.35297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (1m3.271335984s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018953 -n default-k8s-diff-port-018953
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.63s)
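
The restart above serves the API on port 8444 (--apiserver-port=8444); the resulting kubeconfig entry can be spot-checked with (illustrative):
# The reported server URL should end in :8444.
kubectl config view --minify --context default-k8s-diff-port-018953 -o jsonpath='{.clusters[0].cluster.server}'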

                                                
                                    
TestNetworkPlugins/group/calico/Start (123.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m3.679237962s)
--- PASS: TestNetworkPlugins/group/calico/Start (123.68s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-949855 "pgrep -a kubelet"
I1213 09:35:03.158744   13307 config.go:182] Loaded profile config "auto-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lsc8s" [4655f943-eb13-4a88-b0d1-a9a9cd6c4f8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lsc8s" [4655f943-eb13-4a88-b0d1-a9a9cd6c4f8e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00470872s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
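
The DNS, Localhost and HairPin subtests above reduce to three kubectl exec probes against the netcat deployment, copied here from the run; swapping the context name replays them against any other -949855 profile:
# DNS: resolve the in-cluster API service name from inside the pod.
kubectl --context auto-949855 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod can reach port 8080 on its own loopback interface.
kubectl --context auto-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin: the pod can reach itself back through the netcat service name.
kubectl --context auto-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"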

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (76.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m16.516677727s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-p2brd" [802422c6-7fb7-4956-ad08-24d3ab961d9b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-p2brd" [802422c6-7fb7-4956-ad08-24d3ab961d9b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005332103s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-w4pjr" [236afd39-6b53-4033-9c5b-d92f593f1cb1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005957518s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
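
The controller check above is a label-selector wait in kube-system; the equivalent one-off query:
# The kindnet DaemonSet pod should be Running on the node.
kubectl --context kindnet-949855 get pods -n kube-system -l app=kindnet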

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-p2brd" [802422c6-7fb7-4956-ad08-24d3ab961d9b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00592096s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-018953 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-949855 "pgrep -a kubelet"
I1213 09:36:03.513028   13307 config.go:182] Loaded profile config "kindnet-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rs4xp" [ad19513e-9a20-4cff-8ff1-cb726e409810] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rs4xp" [ad19513e-9a20-4cff-8ff1-cb726e409810] Running
E1213 09:36:12.155620   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005750021s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018953 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Start (94.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1213 09:36:34.524460   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:36:35.363261   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/skaffold-452054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m34.234957376s)
--- PASS: TestNetworkPlugins/group/false/Start (94.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (98.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m38.813786507s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (98.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-949855 "pgrep -a kubelet"
I1213 09:36:50.795408   13307 config.go:182] Loaded profile config "custom-flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-949855 replace --force -f testdata/netcat-deployment.yaml
I1213 09:36:51.060756   13307 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qkf46" [859245b8-4082-4149-ae85-97e10c7c5d95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qkf46" [859245b8-4082-4149-ae85-97e10c7c5d95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005183595s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-ktf92" [3135da63-b9e4-4382-9209-e980d38f7768] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004527822s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-949855 "pgrep -a kubelet"
I1213 09:37:06.614322   13307 config.go:182] Loaded profile config "calico-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rqz27" [b7677d25-7444-4bdf-a97d-deea0dbd8240] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rqz27" [b7677d25-7444-4bdf-a97d-deea0dbd8240] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.010075125s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m13.974435879s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.97s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (102.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1213 09:37:46.209609   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:38:05.039057   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/gvisor-751442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m42.925343875s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.93s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-949855 "pgrep -a kubelet"
I1213 09:38:07.240077   13307 config.go:182] Loaded profile config "false-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8xwsm" [b9e99e97-202e-409b-984d-52a6dce04d24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8xwsm" [b9e99e97-202e-409b-984d-52a6dce04d24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.005811168s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-949855 "pgrep -a kubelet"
I1213 09:38:27.755648   13307 config.go:182] Loaded profile config "enable-default-cni-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-949855 replace --force -f testdata/netcat-deployment.yaml
E1213 09:38:27.913587   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-888658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fvj5m" [9de456d5-eaf1-495c-a634-11a343fee3e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fvj5m" [9de456d5-eaf1-495c-a634-11a343fee3e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.009240124s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-svl5w" [6e160779-0047-4a03-82fb-10b38aae77cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00552001s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (89.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m29.170342976s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (89.17s)
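
For orientation, the TestNetworkPlugins Start subtests in this run share the same start invocation and differ only in how the network plugin is chosen; a sketch of the distinguishing flags, taken from the commands logged in the blocks above:
# Common template: out/minikube-linux-amd64 start -p <profile>-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 <plugin flag>
#   auto                (no plugin flag)
#   kindnet             --cni=kindnet
#   calico              --cni=calico
#   custom-flannel      --cni=testdata/kube-flannel.yaml
#   enable-default-cni  --enable-default-cni=true
#   false               --cni=false
#   flannel             --cni=flannel
#   bridge              --cni=bridge
#   kubenet             --network-plugin=kubenet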

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-949855 "pgrep -a kubelet"
I1213 09:38:39.908716   13307 config.go:182] Loaded profile config "flannel-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tsbv6" [9161f26e-b13c-4919-b462-d8c5d58441a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tsbv6" [9161f26e-b13c-4919-b462-d8c5d58441a7] Running
E1213 09:38:50.884998   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/old-k8s-version-394980/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006157163s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)
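
Each PersistentMounts subtest that follows runs the same probe against a different directory: df restricted to ext4 must still list the path, showing the mount is backed by the persistent disk rather than tmpfs. The /data probe, copied from the run:
# df -t ext4 <dir> prints nothing for non-ext4 mounts, so the grep fails unless the path is persistently backed.
out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /data | grep /data"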

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)
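All seven PersistentMounts subtests above run the same "df -t ext4" check over a list of guest directories. Below is a hypothetical sketch of that loop; the profile name and directory list are taken from the log, but this is not the iso_test.go implementation.

// persistent_mounts_sketch.go - hypothetical sketch of the persistent-mount
// checks shown above; not the real iso_test.go code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "guest-566372" // profile name taken from the log above
	mounts := []string{
		"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
	}
	for _, dir := range mounts {
		// Same command as the log: verify the directory sits on an ext4 filesystem.
		check := fmt.Sprintf("df -t ext4 %s | grep %s", dir, dir)
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", check).CombinedOutput()
		if err != nil || !strings.Contains(string(out), dir) {
			fmt.Printf("%s: not a persistent ext4 mount (%v)\n", dir, err)
			continue
		}
		fmt.Printf("%s: persistent\n", dir)
	}
}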

                                                
                                    
x
+
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765481609-22101
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 28bc9824e3c85d2e3519912c2810d5729ab9ce8c
--- PASS: TestISOImage/VersionJSON (0.19s)
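A hypothetical sketch of the VersionJSON check above, assuming the keys in /version.json match the field names printed in the log (iso_version, kicbase_version, minikube_version, commit); this is not the iso_test.go code.

// version_json_sketch.go - hypothetical sketch of reading /version.json from the
// guest; the JSON key names are an assumption based on the log output above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type isoVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-566372",
		"ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}
	var v isoVersion
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
		v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}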

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-566372 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
E1213 09:39:08.132057   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/no-preload-616969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
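The eBPFSupport check above only probes for the kernel's BTF blob. A hypothetical sketch of the same probe, reusing the command from the log:

// ebpf_btf_sketch.go - hypothetical sketch of the BTF presence probe used by the
// eBPFSupport test above; not the real iso_test.go code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: /sys/kernel/btf/vmlinux exists only when the kernel
	// ships BTF type information, which most modern eBPF tooling expects.
	probe := "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-566372", "ssh", probe).Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "OK") {
		fmt.Println("kernel exposes BTF at /sys/kernel/btf/vmlinux")
	} else {
		fmt.Println("no /sys/kernel/btf/vmlinux in this ISO")
	}
}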

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-949855 "pgrep -a kubelet"
E1213 09:39:21.946840   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1213 09:39:22.126007   13307 config.go:182] Loaded profile config "bridge-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)
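The KubeletFlags step above just dumps the running kubelet command line over SSH; the test then asserts on its flags. A hypothetical sketch of that probe follows (the assertion itself is omitted, since the expected flags depend on the CNI under test).

// kubelet_flags_sketch.go - hypothetical sketch of the KubeletFlags probe above;
// it only fetches the kubelet command line, which the real test then inspects.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the log: list the running kubelet process with its full arguments.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "bridge-949855",
		"pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet command line: %s", out)
}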

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-p79xp" [18fa876c-9638-44be-9e47-a093f0246f54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 09:39:24.508257   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-p79xp" [18fa876c-9638-44be-9e47-a093f0246f54] Running
E1213 09:39:29.630502   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00445011s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)
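The NetCatPod step above applies testdata/netcat-deployment.yaml and then waits up to 15 minutes for the netcat pod to become healthy. A hypothetical sketch of that flow is below; the real test polls pods labelled app=netcat with its own helpers, so "kubectl wait" on the deployment is used here only as a stand-in.

// netcat_pod_sketch.go - hypothetical sketch of the NetCatPod flow above;
// not the real net_test.go implementation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(ctx string, args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	ctx := "bridge-949855" // context name taken from the log above
	// Same manifest the test uses.
	if err := kubectl(ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		panic(err)
	}
	// The log shows a 15m ceiling for the pods to become healthy.
	if err := kubectl(ctx, "wait", "--for=condition=Available", "deployment/netcat", "--timeout=15m"); err != nil {
		panic(err)
	}
	fmt.Println("netcat deployment is ready")
}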

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-949855 "pgrep -a kubelet"
I1213 09:40:06.645953   13307 config.go:182] Loaded profile config "kubenet-949855": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-949855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xzzj6" [5d6cea7e-b088-43cd-abfc-81fb3b73193a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 09:40:08.597890   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/auto-949855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-xzzj6" [5d6cea7e-b088-43cd-abfc-81fb3b73193a] Running
E1213 09:40:13.719777   13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/auto-949855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.00456625s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-949855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-949855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

Test skip (45/452)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
289 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
317 TestKicCustomNetwork 0
318 TestKicExistingNetwork 0
319 TestKicCustomSubnet 0
320 TestKicStaticIP 0
352 TestChangeNoneUser 0
355 TestScheduledStopWindows 0
359 TestInsufficientStorage 0
363 TestMissingContainerUpgrade 0
379 TestStartStop/group/disable-driver-mounts 0.24
385 TestNetworkPlugins/group/cilium 4.14
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-518844" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-518844
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-949855 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-949855" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-949855

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-949855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949855"

                                                
                                                
----------------------- debugLogs end: cilium-949855 [took: 3.941213028s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-949855" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-949855
--- SKIP: TestNetworkPlugins/group/cilium (4.14s)