Test Report: KVM_Docker_Linux_x86 22241

                    
7cd9f41b7421760cf1f1eaa8725bdb975037b06d:2025-12-20

Test failures (1/456)

Order  Failed test                                               Duration (s)
490    TestStartStop/group/default-k8s-diff-port/serial/Pause    41.63
TestStartStop/group/default-k8s-diff-port/serial/Pause (41.63s)
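The failure below comes from the pause round-trip check: minikube pause on profile default-k8s-diff-port-032958 succeeded, but the follow-up status --format={{.APIServer}} and --format={{.Kubelet}} calls exited with status 2 and reported "Stopped", while the test wants "Paused" after a pause. What follows is a minimal, hypothetical reproduction sketch in Go, not the actual start_stop_delete_test.go code; it assumes only the binary path and profile name that appear in the log below.

    // Hypothetical reproduction sketch; mirrors the commands shown in the log,
    // not the real test harness.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run invokes the minikube binary used by this CI job (path taken from the log).
    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        profile := "default-k8s-diff-port-032958" // profile name from the log

        // Pause the cluster, as the test does.
        if _, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
            fmt.Println("pause failed:", err)
            return
        }

        // The test expects "Paused" here; this failing run reported "Stopped" (exit status 2).
        status, err := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
        fmt.Printf("post-pause apiserver status = %q (err=%v); want %q\n", status, err, "Paused")
    }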

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-032958 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-032958 --alsologtostderr -v=1: (1.809165999s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958: exit status 2 (15.850844873s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
E1220 02:14:11.117451   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.122831   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.133306   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.153784   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.194035   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.274437   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.434883   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:11.755911   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:12.397056   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:13.677687   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:14:16.238180   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958: exit status 2 (15.842115077s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-032958 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-032958 logs -n 25
E1220 02:14:21.358406   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-032958 logs -n 25: (3.152680402s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │           PROFILE            │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-503505 sudo systemctl cat kubelet --no-pager                                                                              │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo journalctl -xeu kubelet --all --full --no-pager                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/kubernetes/kubelet.conf                                                                              │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /var/lib/kubelet/config.yaml                                                                              │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status docker --all --full --no-pager                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat docker --no-pager                                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/docker/daemon.json                                                                                   │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo docker system info                                                                                            │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status cri-docker --all --full --no-pager                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat cri-docker --no-pager                                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                      │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cri-dockerd --version                                                                                         │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status containerd --all --full --no-pager                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat containerd --no-pager                                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /lib/systemd/system/containerd.service                                                                    │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/containerd/config.toml                                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo containerd config dump                                                                                        │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status crio --all --full --no-pager                                                                 │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │                     │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat crio --no-pager                                                                                 │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                       │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo crio config                                                                                                   │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ delete  │ -p kindnet-503505                                                                                                                    │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ start   │ -p false-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --container-runtime=docker │ false-503505                 │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-032958 --alsologtostderr -v=1                                                                               │ default-k8s-diff-port-032958 │ minitest │ v1.37.0 │ 20 Dec 25 02:14 UTC │ 20 Dec 25 02:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/20 02:13:53
	Running on machine: minitest-vm-9d09530a
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1220 02:13:53.658426   38979 out.go:360] Setting OutFile to fd 1 ...
	I1220 02:13:53.658597   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 02:13:53.658612   38979 out.go:374] Setting ErrFile to fd 2...
	I1220 02:13:53.658620   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 02:13:53.658880   38979 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 02:13:53.659482   38979 out.go:368] Setting JSON to false
	I1220 02:13:53.660578   38979 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":3547,"bootTime":1766193287,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 02:13:53.660687   38979 start.go:143] virtualization: kvm guest
	I1220 02:13:53.662866   38979 out.go:179] * [false-503505] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 02:13:53.664260   38979 notify.go:221] Checking for updates...
	I1220 02:13:53.664290   38979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 02:13:53.665824   38979 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 02:13:53.667283   38979 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:53.668904   38979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 02:13:53.670341   38979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 02:13:53.672156   38979 config.go:182] Loaded profile config "calico-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672297   38979 config.go:182] Loaded profile config "custom-flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672434   38979 config.go:182] Loaded profile config "default-k8s-diff-port-032958": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672545   38979 config.go:182] Loaded profile config "guest-073858": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1220 02:13:53.672679   38979 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 02:13:53.714352   38979 out.go:179] * Using the kvm2 driver based on user configuration
	I1220 02:13:53.715582   38979 start.go:309] selected driver: kvm2
	I1220 02:13:53.715609   38979 start.go:928] validating driver "kvm2" against <nil>
	I1220 02:13:53.715626   38979 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 02:13:53.716847   38979 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1220 02:13:53.717254   38979 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1220 02:13:53.717297   38979 cni.go:84] Creating CNI manager for "false"
	I1220 02:13:53.717349   38979 start.go:353] cluster config:
	{Name:false-503505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:false-503505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1220 02:13:53.717508   38979 iso.go:125] acquiring lock: {Name:mk8cff2fd2ec419d0f1f974993910ae0235f0b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1220 02:13:53.719137   38979 out.go:179] * Starting "false-503505" primary control-plane node in "false-503505" cluster
	I1220 02:13:53.720475   38979 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1220 02:13:53.720519   38979 preload.go:203] Found local preload: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1220 02:13:53.720529   38979 cache.go:65] Caching tarball of preloaded images
	I1220 02:13:53.720653   38979 preload.go:251] Found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1220 02:13:53.720670   38979 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1220 02:13:53.720801   38979 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json ...
	I1220 02:13:53.720830   38979 lock.go:35] WriteFile acquiring /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json: {Name:mkc8b6869a0bb6c3a942663395236fb8c2775a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:13:53.721027   38979 start.go:360] acquireMachinesLock for false-503505: {Name:mkeb3229b5d18611c16c8e938b31492b9b6546b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1220 02:13:53.721080   38979 start.go:364] duration metric: took 32.113µs to acquireMachinesLock for "false-503505"
	I1220 02:13:53.721108   38979 start.go:93] Provisioning new machine with config: &{Name:false-503505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.34.3 ClusterName:false-503505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1220 02:13:53.721191   38979 start.go:125] createHost starting for "" (driver="kvm2")
	I1220 02:13:53.104657   37878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1220 02:13:53.203649   37878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1220 02:13:53.414002   37878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1220 02:13:53.414235   37878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-503505 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1220 02:13:53.718885   37878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1220 02:13:53.719606   37878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-503505 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1220 02:13:54.333369   37878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1220 02:13:54.424119   37878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1220 02:13:54.440070   37878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1220 02:13:54.440221   37878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1220 02:13:54.643883   37878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1220 02:13:54.882013   37878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1220 02:13:54.904688   37878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1220 02:13:55.025586   37878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1220 02:13:55.145485   37878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1220 02:13:55.145626   37878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1220 02:13:55.148326   37878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1220 02:13:54.723698   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	W1220 02:13:57.088471   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	I1220 02:13:55.150289   37878 out.go:252]   - Booting up control plane ...
	I1220 02:13:55.150458   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1220 02:13:55.151333   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1220 02:13:55.152227   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1220 02:13:55.175699   37878 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1220 02:13:55.175981   37878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1220 02:13:55.186275   37878 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1220 02:13:55.186852   37878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1220 02:13:55.186945   37878 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1220 02:13:55.443272   37878 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1220 02:13:55.443453   37878 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1220 02:13:57.443421   37878 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001962214s
	I1220 02:13:57.453249   37878 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1220 02:13:57.453392   37878 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.110:8443/livez
	I1220 02:13:57.453521   37878 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1220 02:13:57.453636   37878 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1220 02:13:53.723129   38979 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1220 02:13:53.723383   38979 start.go:159] libmachine.API.Create for "false-503505" (driver="kvm2")
	I1220 02:13:53.723423   38979 client.go:173] LocalClient.Create starting
	I1220 02:13:53.723510   38979 main.go:144] libmachine: Reading certificate data from /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem
	I1220 02:13:53.723557   38979 main.go:144] libmachine: Decoding PEM data...
	I1220 02:13:53.723581   38979 main.go:144] libmachine: Parsing certificate...
	I1220 02:13:53.723676   38979 main.go:144] libmachine: Reading certificate data from /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem
	I1220 02:13:53.723706   38979 main.go:144] libmachine: Decoding PEM data...
	I1220 02:13:53.723725   38979 main.go:144] libmachine: Parsing certificate...
	I1220 02:13:53.724182   38979 main.go:144] libmachine: creating domain...
	I1220 02:13:53.724217   38979 main.go:144] libmachine: creating network...
	I1220 02:13:53.725920   38979 main.go:144] libmachine: found existing default network
	I1220 02:13:53.726255   38979 main.go:144] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>650ca552-1913-49ac-a1fd-736d0c584a06</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:de:58:ff'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1220 02:13:53.727630   38979 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e8:02:c4} reservation:<nil>}
	I1220 02:13:53.728421   38979 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:7d:ff} reservation:<nil>}
	I1220 02:13:53.729869   38979 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001aac760}
	I1220 02:13:53.729965   38979 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-false-503505</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1220 02:13:53.736168   38979 main.go:144] libmachine: creating private network mk-false-503505 192.168.61.0/24...
	I1220 02:13:53.810612   38979 main.go:144] libmachine: private network mk-false-503505 192.168.61.0/24 created
	I1220 02:13:53.810976   38979 main.go:144] libmachine: <network>
	  <name>mk-false-503505</name>
	  <uuid>145d091e-eda6-4cfe-8946-ea394cfc6f9d</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:b5:b9:98'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1220 02:13:53.811017   38979 main.go:144] libmachine: setting up store path in /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 ...
	I1220 02:13:53.811066   38979 main.go:144] libmachine: building disk image from file:///home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1220 02:13:53.811082   38979 common.go:152] Making disk image using store path: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:53.811185   38979 main.go:144] libmachine: Downloading /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/boot2docker.iso from file:///home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1220 02:13:54.101881   38979 common.go:159] Creating ssh key: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa...
	I1220 02:13:54.171818   38979 common.go:165] Creating raw disk image: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk...
	I1220 02:13:54.171860   38979 main.go:144] libmachine: Writing magic tar header
	I1220 02:13:54.171878   38979 main.go:144] libmachine: Writing SSH key tar header
	I1220 02:13:54.171952   38979 common.go:179] Fixing permissions on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 ...
	I1220 02:13:54.172017   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505
	I1220 02:13:54.172042   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 (perms=drwx------)
	I1220 02:13:54.172055   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines
	I1220 02:13:54.172068   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines (perms=drwxr-xr-x)
	I1220 02:13:54.172080   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:54.172089   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube (perms=drwxr-xr-x)
	I1220 02:13:54.172097   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160
	I1220 02:13:54.172106   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160 (perms=drwxrwxr-x)
	I1220 02:13:54.172116   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration
	I1220 02:13:54.172127   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration (perms=drwxrwxr-x)
	I1220 02:13:54.172134   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest
	I1220 02:13:54.172143   38979 main.go:144] libmachine: setting executable bit set on /home/minitest (perms=drwxr-x--x)
	I1220 02:13:54.172153   38979 main.go:144] libmachine: checking permissions on dir: /home
	I1220 02:13:54.172162   38979 main.go:144] libmachine: skipping /home - not owner
	I1220 02:13:54.172166   38979 main.go:144] libmachine: defining domain...
	I1220 02:13:54.173523   38979 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>false-503505</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-false-503505'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1220 02:13:54.178932   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:b7:52:73 in network default
	I1220 02:13:54.179675   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:54.179696   38979 main.go:144] libmachine: starting domain...
	I1220 02:13:54.179701   38979 main.go:144] libmachine: ensuring networks are active...
	I1220 02:13:54.180774   38979 main.go:144] libmachine: Ensuring network default is active
	I1220 02:13:54.181409   38979 main.go:144] libmachine: Ensuring network mk-false-503505 is active
	I1220 02:13:54.182238   38979 main.go:144] libmachine: getting domain XML...
	I1220 02:13:54.183538   38979 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>false-503505</name>
	  <uuid>624dd300-6a99-4c02-9eff-8eb33e6519e9</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-noble'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4e:1e:41'/>
	      <source network='mk-false-503505'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b7:52:73'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1220 02:13:55.365223   38979 main.go:144] libmachine: waiting for domain to start...
	I1220 02:13:55.367393   38979 main.go:144] libmachine: domain is now running
	I1220 02:13:55.367419   38979 main.go:144] libmachine: waiting for IP...
	I1220 02:13:55.368500   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.369502   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.369522   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.369923   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.369977   38979 retry.go:31] will retry after 247.996373ms: waiting for domain to come up
	I1220 02:13:55.619698   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.620501   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.620524   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.620981   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.621018   38979 retry.go:31] will retry after 253.163992ms: waiting for domain to come up
	I1220 02:13:55.875623   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.876522   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.876543   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.876997   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.877034   38979 retry.go:31] will retry after 322.078046ms: waiting for domain to come up
	I1220 02:13:56.200749   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:56.201573   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:56.201590   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:56.201993   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:56.202032   38979 retry.go:31] will retry after 398.279098ms: waiting for domain to come up
	I1220 02:13:56.601723   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:56.602519   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:56.602554   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:56.603065   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:56.603103   38979 retry.go:31] will retry after 668.508453ms: waiting for domain to come up
	I1220 02:13:57.272883   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:57.273735   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:57.273763   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:57.274179   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:57.274223   38979 retry.go:31] will retry after 936.48012ms: waiting for domain to come up
	I1220 02:13:58.212951   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:58.213934   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:58.213955   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:58.214490   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:58.214540   38979 retry.go:31] will retry after 1.101549544s: waiting for domain to come up
	W1220 02:13:59.093909   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	I1220 02:14:00.089963   37762 node_ready.go:49] node "calico-503505" is "Ready"
	I1220 02:14:00.090003   37762 node_ready.go:38] duration metric: took 9.504754397s for node "calico-503505" to be "Ready" ...
	I1220 02:14:00.090027   37762 api_server.go:52] waiting for apiserver process to appear ...
	I1220 02:14:00.090096   37762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 02:14:00.121908   37762 api_server.go:72] duration metric: took 11.479258368s to wait for apiserver process to appear ...
	I1220 02:14:00.121945   37762 api_server.go:88] waiting for apiserver healthz status ...
	I1220 02:14:00.121968   37762 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I1220 02:14:00.133024   37762 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I1220 02:14:00.134499   37762 api_server.go:141] control plane version: v1.34.3
	I1220 02:14:00.134533   37762 api_server.go:131] duration metric: took 12.580039ms to wait for apiserver health ...
	I1220 02:14:00.134544   37762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1220 02:14:00.143085   37762 system_pods.go:59] 9 kube-system pods found
	I1220 02:14:00.143143   37762 system_pods.go:61] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.143160   37762 system_pods.go:61] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.143171   37762 system_pods.go:61] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.143177   37762 system_pods.go:61] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.143183   37762 system_pods.go:61] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.143188   37762 system_pods.go:61] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.143194   37762 system_pods.go:61] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.143219   37762 system_pods.go:61] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.143233   37762 system_pods.go:61] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.143243   37762 system_pods.go:74] duration metric: took 8.690731ms to wait for pod list to return data ...
	I1220 02:14:00.143254   37762 default_sa.go:34] waiting for default service account to be created ...
	I1220 02:14:00.147300   37762 default_sa.go:45] found service account: "default"
	I1220 02:14:00.147335   37762 default_sa.go:55] duration metric: took 4.072144ms for default service account to be created ...
	I1220 02:14:00.147349   37762 system_pods.go:116] waiting for k8s-apps to be running ...
	I1220 02:14:00.153827   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.153869   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.153882   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.153892   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.153900   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.153907   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.153911   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.153917   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.153922   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.153930   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.153953   37762 retry.go:31] will retry after 191.011989ms: missing components: kube-dns
	I1220 02:14:00.353588   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.353638   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.353652   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.353665   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.353673   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.353681   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.353688   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.353696   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.353702   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.353710   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.353731   37762 retry.go:31] will retry after 332.593015ms: missing components: kube-dns
	I1220 02:14:00.697960   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.698016   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.698032   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.698045   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.698051   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.698057   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.698062   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.698068   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.698073   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.698080   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.698098   37762 retry.go:31] will retry after 441.450882ms: missing components: kube-dns
	I1220 02:14:01.147620   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:01.147663   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:01.147675   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:01.147685   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:01.147690   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:01.147697   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:01.147702   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:01.147707   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:01.147711   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:01.147718   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:01.147737   37762 retry.go:31] will retry after 398.996064ms: missing components: kube-dns
	I1220 02:14:01.555710   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:01.555752   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:01.555764   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:01.555774   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:01.555779   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:01.555786   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:01.555791   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:01.555797   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:01.555802   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:01.555813   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:01.555831   37762 retry.go:31] will retry after 742.519055ms: missing components: kube-dns
	I1220 02:14:02.306002   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:02.306049   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:02.306068   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:02.306080   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:02.306088   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:02.306097   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:02.306102   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:02.306109   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:02.306114   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:02.306119   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:02.306141   37762 retry.go:31] will retry after 687.588334ms: missing components: kube-dns
	I1220 02:14:01.475480   37878 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.023883563s
	I1220 02:13:59.318088   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:59.319169   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:59.319195   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:59.319707   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:59.319759   38979 retry.go:31] will retry after 1.133836082s: waiting for domain to come up
	I1220 02:14:00.455752   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:00.457000   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:00.457032   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:00.457642   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:00.457696   38979 retry.go:31] will retry after 1.689205474s: waiting for domain to come up
	I1220 02:14:02.149657   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:02.150579   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:02.150669   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:02.151167   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:02.151218   38979 retry.go:31] will retry after 1.402452731s: waiting for domain to come up
	I1220 02:14:03.555309   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:03.556319   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:03.556389   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:03.556908   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:03.556948   38979 retry.go:31] will retry after 2.79303956s: waiting for domain to come up
	I1220 02:14:03.304845   37878 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.854389668s
	I1220 02:14:04.452000   37878 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001670897s
	I1220 02:14:04.482681   37878 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1220 02:14:04.509128   37878 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1220 02:14:04.533960   37878 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1220 02:14:04.534255   37878 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-503505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1220 02:14:04.549617   37878 kubeadm.go:319] [bootstrap-token] Using token: 5feew1.aaci0na7tzxpkq74
	I1220 02:14:04.551043   37878 out.go:252]   - Configuring RBAC rules ...
	I1220 02:14:04.551218   37878 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1220 02:14:04.561847   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1220 02:14:04.591000   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1220 02:14:04.597908   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1220 02:14:04.606680   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1220 02:14:04.614933   37878 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1220 02:14:04.862442   37878 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1220 02:14:05.356740   37878 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1220 02:14:05.862025   37878 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1220 02:14:05.865061   37878 kubeadm.go:319] 
	I1220 02:14:05.865156   37878 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1220 02:14:05.865168   37878 kubeadm.go:319] 
	I1220 02:14:05.865282   37878 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1220 02:14:05.865294   37878 kubeadm.go:319] 
	I1220 02:14:05.865359   37878 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1220 02:14:05.865464   37878 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1220 02:14:05.865569   37878 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1220 02:14:05.865593   37878 kubeadm.go:319] 
	I1220 02:14:05.865675   37878 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1220 02:14:05.865685   37878 kubeadm.go:319] 
	I1220 02:14:05.865781   37878 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1220 02:14:05.865795   37878 kubeadm.go:319] 
	I1220 02:14:05.865876   37878 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1220 02:14:05.865983   37878 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1220 02:14:05.866079   37878 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1220 02:14:05.866085   37878 kubeadm.go:319] 
	I1220 02:14:05.866221   37878 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1220 02:14:05.866332   37878 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1220 02:14:05.866337   37878 kubeadm.go:319] 
	I1220 02:14:05.866459   37878 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5feew1.aaci0na7tzxpkq74 \
	I1220 02:14:05.866573   37878 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:34b132c11c5a062e0480b441f2caac7fcba372b596da4b2c80fd8c00c74704a7 \
	I1220 02:14:05.866595   37878 kubeadm.go:319] 	--control-plane 
	I1220 02:14:05.866599   37878 kubeadm.go:319] 
	I1220 02:14:05.866684   37878 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1220 02:14:05.866688   37878 kubeadm.go:319] 
	I1220 02:14:05.866779   37878 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5feew1.aaci0na7tzxpkq74 \
	I1220 02:14:05.866902   37878 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:34b132c11c5a062e0480b441f2caac7fcba372b596da4b2c80fd8c00c74704a7 
	I1220 02:14:05.869888   37878 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1220 02:14:05.869959   37878 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1220 02:14:05.871868   37878 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1220 02:14:03.004292   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:03.004339   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:03.004352   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:03.004361   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:03.004367   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:03.004374   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:03.004379   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:03.004384   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:03.004389   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:03.004394   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:03.004412   37762 retry.go:31] will retry after 732.081748ms: missing components: kube-dns
	I1220 02:14:03.744119   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:03.744161   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:03.744175   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:03.744185   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:03.744191   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:03.744214   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:03.744221   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:03.744227   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:03.744232   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:03.744241   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:03.744273   37762 retry.go:31] will retry after 1.276813322s: missing components: kube-dns
	I1220 02:14:05.030079   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:05.030129   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:05.030146   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:05.030161   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:05.030168   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:05.030187   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:05.030194   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:05.030221   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:05.030229   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:05.030235   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:05.030257   37762 retry.go:31] will retry after 1.238453929s: missing components: kube-dns
	I1220 02:14:06.275974   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:06.276021   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:06.276033   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:06.276049   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:06.276055   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:06.276061   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:06.276066   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:06.276077   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:06.276083   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:06.276087   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:06.276106   37762 retry.go:31] will retry after 1.908248969s: missing components: kube-dns
	I1220 02:14:05.873406   37878 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1220 02:14:05.873469   37878 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1220 02:14:05.881393   37878 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1220 02:14:05.881431   37878 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1220 02:14:05.936780   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1220 02:14:06.396862   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:06.396880   37878 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1220 02:14:06.396862   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-503505 minikube.k8s.io/updated_at=2025_12_20T02_14_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7cd9f41b7421760cf1f1eaa8725bdb975037b06d minikube.k8s.io/name=custom-flannel-503505 minikube.k8s.io/primary=true
	I1220 02:14:06.630781   37878 ops.go:34] apiserver oom_adj: -16
	I1220 02:14:06.630941   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:07.131072   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:07.631526   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:06.351650   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:06.352735   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:06.352774   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:06.353319   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:06.353358   38979 retry.go:31] will retry after 3.225841356s: waiting for domain to come up
	I1220 02:14:08.131099   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:08.631429   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:09.131400   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:09.631470   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.131821   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.631264   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.765536   37878 kubeadm.go:1114] duration metric: took 4.368721457s to wait for elevateKubeSystemPrivileges
	I1220 02:14:10.765599   37878 kubeadm.go:403] duration metric: took 18.502801612s to StartCluster
	I1220 02:14:10.765625   37878 settings.go:142] acquiring lock: {Name:mk57472848b32b0320e862b3ad8a64076ed3d76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:14:10.765731   37878 settings.go:150] Updating kubeconfig:  /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 02:14:10.767410   37878 lock.go:35] WriteFile acquiring /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig: {Name:mk7e6532318eb55e3c1811a528040bd41c46d8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:14:10.767716   37878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1220 02:14:10.767786   37878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1220 02:14:10.767867   37878 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-503505"
	I1220 02:14:10.767885   37878 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-503505"
	I1220 02:14:10.767747   37878 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1220 02:14:10.767912   37878 host.go:66] Checking if "custom-flannel-503505" exists ...
	I1220 02:14:10.767936   37878 config.go:182] Loaded profile config "custom-flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:14:10.767992   37878 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-503505"
	I1220 02:14:10.768006   37878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-503505"
	I1220 02:14:10.769347   37878 out.go:179] * Verifying Kubernetes components...
	I1220 02:14:10.770891   37878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:10.772643   37878 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-503505"
	I1220 02:14:10.772686   37878 host.go:66] Checking if "custom-flannel-503505" exists ...
	I1220 02:14:10.772827   37878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1220 02:14:10.774271   37878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1220 02:14:10.774291   37878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1220 02:14:10.775118   37878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1220 02:14:10.775173   37878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1220 02:14:10.778715   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779148   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779240   37878 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:50", ip: ""} in network mk-custom-flannel-503505: {Iface:virbr4 ExpiryTime:2025-12-20 03:13:37 +0000 UTC Type:0 Mac:52:54:00:31:8f:50 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:custom-flannel-503505 Clientid:01:52:54:00:31:8f:50}
	I1220 02:14:10.779272   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined IP address 192.168.72.110 and MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779776   37878 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/custom-flannel-503505/id_rsa Username:docker}
	I1220 02:14:10.780325   37878 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:50", ip: ""} in network mk-custom-flannel-503505: {Iface:virbr4 ExpiryTime:2025-12-20 03:13:37 +0000 UTC Type:0 Mac:52:54:00:31:8f:50 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:custom-flannel-503505 Clientid:01:52:54:00:31:8f:50}
	I1220 02:14:10.780367   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined IP address 192.168.72.110 and MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.780605   37878 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/custom-flannel-503505/id_rsa Username:docker}
	I1220 02:14:11.077940   37878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1220 02:14:11.193874   37878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1220 02:14:11.505786   37878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1220 02:14:11.514993   37878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1220 02:14:11.665088   37878 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1220 02:14:11.666520   37878 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-503505" to be "Ready" ...
	I1220 02:14:12.188508   37878 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-503505" context rescaled to 1 replicas
	I1220 02:14:12.198043   37878 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1220 02:14:08.191550   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:08.191589   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:08.191605   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:08.191621   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:08.191627   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:08.191633   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:08.191639   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:08.191645   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:08.191652   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:08.191661   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:08.191680   37762 retry.go:31] will retry after 2.235844761s: missing components: kube-dns
	I1220 02:14:10.441962   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:10.442003   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:10.442017   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:10.442028   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:10.442035   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:10.442041   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:10.442048   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:10.442053   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:10.442059   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:10.442063   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:10.442080   37762 retry.go:31] will retry after 3.072193082s: missing components: kube-dns
	I1220 02:14:12.199503   37878 addons.go:530] duration metric: took 1.431726471s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1220 02:14:09.580950   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:09.581833   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:09.581857   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:09.582327   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:09.582367   38979 retry.go:31] will retry after 3.32332613s: waiting for domain to come up
	I1220 02:14:12.910036   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:12.911080   38979 main.go:144] libmachine: domain false-503505 has current primary IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:12.911099   38979 main.go:144] libmachine: found domain IP: 192.168.61.177
	I1220 02:14:12.911107   38979 main.go:144] libmachine: reserving static IP address...
	I1220 02:14:12.911656   38979 main.go:144] libmachine: unable to find host DHCP lease matching {name: "false-503505", mac: "52:54:00:4e:1e:41", ip: "192.168.61.177"} in network mk-false-503505
	I1220 02:14:13.162890   38979 main.go:144] libmachine: reserved static IP address 192.168.61.177 for domain false-503505
	I1220 02:14:13.162914   38979 main.go:144] libmachine: waiting for SSH...
	I1220 02:14:13.162921   38979 main.go:144] libmachine: Getting to WaitForSSH function...
	I1220 02:14:13.166240   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.166798   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.166839   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.167111   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.167442   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.167462   38979 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1220 02:14:13.287553   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1220 02:14:13.288033   38979 main.go:144] libmachine: domain creation complete
	I1220 02:14:13.289768   38979 machine.go:94] provisionDockerMachine start ...
	I1220 02:14:13.292967   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.293534   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.293566   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.293831   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.294091   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.294106   38979 main.go:144] libmachine: About to run SSH command:
	hostname
	I1220 02:14:13.408900   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1220 02:14:13.408931   38979 buildroot.go:166] provisioning hostname "false-503505"
	I1220 02:14:13.412183   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.412723   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.412747   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.412990   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.413194   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.413235   38979 main.go:144] libmachine: About to run SSH command:
	sudo hostname false-503505 && echo "false-503505" | sudo tee /etc/hostname
	I1220 02:14:13.545519   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: false-503505
	
	I1220 02:14:13.548500   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.548973   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.549006   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.549225   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.549497   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.549521   38979 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-503505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-503505/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-503505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1220 02:14:13.522551   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:13.522594   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:13.522608   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:13.522618   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:13.522624   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:13.522630   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:13.522633   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:13.522638   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:13.522643   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:13.522648   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:13.522671   37762 retry.go:31] will retry after 2.893940025s: missing components: kube-dns
	I1220 02:14:16.427761   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:16.427804   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:16.427822   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:16.427834   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:16.427841   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:16.427847   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:16.427857   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:16.427863   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:16.427876   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:16.427881   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:16.427898   37762 retry.go:31] will retry after 5.028189083s: missing components: kube-dns
	W1220 02:14:13.671217   37878 node_ready.go:57] node "custom-flannel-503505" has "Ready":"False" status (will retry)
	W1220 02:14:16.172759   37878 node_ready.go:57] node "custom-flannel-503505" has "Ready":"False" status (will retry)
	I1220 02:14:13.683279   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1220 02:14:13.683320   38979 buildroot.go:172] set auth options {CertDir:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube CaCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem ServerKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server-key.pem ClientKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube}
	I1220 02:14:13.683376   38979 buildroot.go:174] setting up certificates
	I1220 02:14:13.683393   38979 provision.go:84] configureAuth start
	I1220 02:14:13.687478   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.688091   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.688126   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.691975   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.692656   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.692715   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.692969   38979 provision.go:143] copyHostCerts
	I1220 02:14:13.693049   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem, removing ...
	I1220 02:14:13.693064   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem
	I1220 02:14:13.693154   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem (1082 bytes)
	I1220 02:14:13.693360   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem, removing ...
	I1220 02:14:13.693377   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem
	I1220 02:14:13.693441   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem (1127 bytes)
	I1220 02:14:13.693548   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem, removing ...
	I1220 02:14:13.693560   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem
	I1220 02:14:13.693612   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/key.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem (1675 bytes)
	I1220 02:14:13.693705   38979 provision.go:117] generating server cert: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem ca-key=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem private-key=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca-key.pem org=minitest.false-503505 san=[127.0.0.1 192.168.61.177 false-503505 localhost minikube]
	I1220 02:14:13.709086   38979 provision.go:177] copyRemoteCerts
	I1220 02:14:13.709144   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1220 02:14:13.713124   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.713703   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.713755   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.713967   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:13.809584   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1220 02:14:13.845246   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1220 02:14:13.881465   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1220 02:14:13.915284   38979 provision.go:87] duration metric: took 231.876161ms to configureAuth
	I1220 02:14:13.915334   38979 buildroot.go:189] setting minikube options for container-runtime
	I1220 02:14:13.915608   38979 config.go:182] Loaded profile config "false-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:14:13.919150   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.919807   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.919851   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.920156   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.920492   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.920559   38979 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1220 02:14:14.043505   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1220 02:14:14.043553   38979 buildroot.go:70] root file system type: tmpfs
	I1220 02:14:14.043717   38979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1220 02:14:14.047676   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.048130   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:14.048163   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.048457   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:14.048704   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:14.048784   38979 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1220 02:14:14.192756   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1220 02:14:14.196528   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.197071   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:14.197103   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.197379   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:14.197658   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:14.197687   38979 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1220 02:14:15.322369   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1220 02:14:15.322395   38979 machine.go:97] duration metric: took 2.032605943s to provisionDockerMachine
	I1220 02:14:15.322407   38979 client.go:176] duration metric: took 21.59897051s to LocalClient.Create
	I1220 02:14:15.322422   38979 start.go:167] duration metric: took 21.599041943s to libmachine.API.Create "false-503505"
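The diff-or-replace step above installs the new docker.service only when it differs from what is already on disk, then reloads systemd, enables the unit, and restarts Docker; the rendered unit also exposes dockerd on tcp://0.0.0.0:2376 behind TLS. A minimal sketch of verifying both outcomes, assuming the client certificate, key, and CA generated for this machine are available locally (the CERT_DIR path below is illustrative, not taken from this log):

    # On the guest: confirm the unit is installed, enabled, and running.
    sudo systemctl is-enabled docker     # expect "enabled"
    sudo systemctl is-active docker      # expect "active"
    systemctl cat docker | head -n 5     # show the unit that was installed

    # From the host: exercise the TLS TCP endpoint declared in ExecStart.
    # CERT_DIR is hypothetical; substitute wherever this profile's client
    # certs actually live.
    CERT_DIR="$HOME/.minikube/machines/false-503505"
    docker --host tcp://192.168.61.177:2376 \
      --tlsverify \
      --tlscacert "$CERT_DIR/ca.pem" \
      --tlscert   "$CERT_DIR/cert.pem" \
      --tlskey    "$CERT_DIR/key.pem" \
      version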
	I1220 02:14:15.322430   38979 start.go:293] postStartSetup for "false-503505" (driver="kvm2")
	I1220 02:14:15.322443   38979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1220 02:14:15.322513   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1220 02:14:15.325726   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.326187   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.326227   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.326423   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.421695   38979 ssh_runner.go:195] Run: cat /etc/os-release
	I1220 02:14:15.426952   38979 info.go:137] Remote host: Buildroot 2025.02
	I1220 02:14:15.426987   38979 filesync.go:126] Scanning /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/addons for local assets ...
	I1220 02:14:15.427077   38979 filesync.go:126] Scanning /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files for local assets ...
	I1220 02:14:15.427228   38979 filesync.go:149] local asset: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/ssl/certs/130182.pem -> 130182.pem in /etc/ssl/certs
	I1220 02:14:15.427399   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1220 02:14:15.440683   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/ssl/certs/130182.pem --> /etc/ssl/certs/130182.pem (1708 bytes)
	I1220 02:14:15.472751   38979 start.go:296] duration metric: took 150.304753ms for postStartSetup
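postStartSetup above synced the local asset 130182.pem into /etc/ssl/certs on the guest. A quick, hedged way to confirm the copied certificate arrived intact (standard openssl and stat usage, run on the guest):

    # Inspect the certificate that was just copied in.
    openssl x509 -in /etc/ssl/certs/130182.pem -noout -subject -enddate
    # Size should match the 1708 bytes reported by the scp step above.
    stat -c '%s bytes' /etc/ssl/certs/130182.pem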
	I1220 02:14:15.476375   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.476839   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.476864   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.477147   38979 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json ...
	I1220 02:14:15.477371   38979 start.go:128] duration metric: took 21.756169074s to createHost
	I1220 02:14:15.480134   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.480583   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.480606   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.480814   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:15.481047   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:15.481060   38979 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1220 02:14:15.603682   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766196855.575822881
	
	I1220 02:14:15.603714   38979 fix.go:216] guest clock: 1766196855.575822881
	I1220 02:14:15.603726   38979 fix.go:229] Guest: 2025-12-20 02:14:15.575822881 +0000 UTC Remote: 2025-12-20 02:14:15.477389482 +0000 UTC m=+21.885083527 (delta=98.433399ms)
	I1220 02:14:15.603749   38979 fix.go:200] guest clock delta is within tolerance: 98.433399ms
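The guest-clock check above runs date +%s.%N over SSH and compares it against the host time, accepting the roughly 98ms delta as within tolerance. A minimal sketch of reproducing that check by hand, assuming SSH access to the guest with the profile's key (key path abbreviated):

    # Compare guest and host clocks; a small positive or negative delta
    # (well under a second, as in this run) is expected.
    GUEST_TS=$(ssh -i ~/.minikube/machines/false-503505/id_rsa \
        docker@192.168.61.177 'date +%s.%N')
    HOST_TS=$(date +%s.%N)
    echo "clock delta: $(echo "$GUEST_TS - $HOST_TS" | bc) s"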
	I1220 02:14:15.603770   38979 start.go:83] releasing machines lock for "false-503505", held for 21.882663608s
	I1220 02:14:15.607369   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.607986   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.608024   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.608687   38979 ssh_runner.go:195] Run: cat /version.json
	I1220 02:14:15.608792   38979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1220 02:14:15.612782   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613294   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.613342   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613436   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613556   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.614074   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.614107   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.614392   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.700660   38979 ssh_runner.go:195] Run: systemctl --version
	I1220 02:14:15.725011   38979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1220 02:14:15.731935   38979 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1220 02:14:15.732099   38979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1220 02:14:15.744444   38979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1220 02:14:15.768292   38979 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
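The two find/sed passes above rewrite any bridge and podman CNI configs so their pod subnet and gateway match the cluster's 10.244.0.0/16 range. A quick check of the file the log says was touched; the expected values follow from the sed expressions, and the exact formatting is illustrative:

    # Show the fields the rewrite targets in the podman bridge conflist.
    grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
    # expected after the rewrite:
    #   "subnet": "10.244.0.0/16"
    #   "gateway": "10.244.0.1"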
	I1220 02:14:15.768338   38979 start.go:496] detecting cgroup driver to use...
	I1220 02:14:15.768490   38979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1220 02:14:15.808234   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1220 02:14:15.830328   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1220 02:14:15.848439   38979 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1220 02:14:15.848537   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1220 02:14:15.865682   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1220 02:14:15.887500   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1220 02:14:15.906005   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1220 02:14:15.925461   38979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1220 02:14:15.940692   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1220 02:14:15.959326   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1220 02:14:15.978291   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
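The run of sed edits above puts containerd on the cgroupfs driver, pins the pause image, points the CNI conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A spot check of the resulting /etc/containerd/config.toml (keys taken from the commands above, output formatting illustrative):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # expected values after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true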
	I1220 02:14:15.997878   38979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1220 02:14:16.014027   38979 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1220 02:14:16.014121   38979 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1220 02:14:16.033465   38979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
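The sysctl probe above failed only because br_netfilter was not loaded yet, which is why modprobe runs next and IP forwarding is switched on directly through /proc. The same prerequisites, expressed as an idempotent sketch:

    sudo modprobe br_netfilter
    # bridge-nf-call-iptables defaults to 1 once the module loads; set it
    # explicitly here for clarity.
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1   # same effect as the echo into /proc above
    # verify both settings took effect
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward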
	I1220 02:14:16.050354   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:16.231792   38979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1220 02:14:16.289416   38979 start.go:496] detecting cgroup driver to use...
	I1220 02:14:16.289528   38979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1220 02:14:16.314852   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1220 02:14:16.343915   38979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1220 02:14:16.373499   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1220 02:14:16.393749   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1220 02:14:16.415218   38979 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1220 02:14:16.448678   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1220 02:14:16.471638   38979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1220 02:14:16.499850   38979 ssh_runner.go:195] Run: which cri-dockerd
	I1220 02:14:16.505358   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1220 02:14:16.518773   38979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1220 02:14:16.542267   38979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1220 02:14:16.744157   38979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1220 02:14:16.924495   38979 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1220 02:14:16.924658   38979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
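The 130-byte /etc/docker/daemon.json pushed above is what switches Docker to the cgroupfs driver mentioned in the preceding line. Its exact contents are not shown in the log; a plausible minimal version, using dockerd's documented exec-opts key, might look like this (an assumption, not a dump of the real file):

    # Hypothetical reconstruction of the daemon.json that selects cgroupfs.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl restart docker   # matches the restart that follows in the log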
	I1220 02:14:16.953858   38979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1220 02:14:16.973889   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:17.180489   38979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1220 02:14:17.720891   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1220 02:14:17.740432   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1220 02:14:17.756728   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1220 02:14:17.780803   38979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1220 02:14:17.958835   38979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1220 02:14:18.121422   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:18.283915   38979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1220 02:14:18.319068   38979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1220 02:14:18.334630   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:18.486080   38979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1220 02:14:18.616715   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1220 02:14:18.643324   38979 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1220 02:14:18.643397   38979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1220 02:14:18.649921   38979 start.go:564] Will wait 60s for crictl version
	I1220 02:14:18.649987   38979 ssh_runner.go:195] Run: which crictl
	I1220 02:14:18.655062   38979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1220 02:14:18.692451   38979 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1220 02:14:18.692517   38979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1220 02:14:18.725655   38979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
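At this point the runtime is reported as docker 28.5.2 behind the cri-dockerd socket configured earlier. The same information can be pulled manually with standard crictl and docker invocations (a sketch; the endpoint and expected version are taken from this log):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker version --format '{{.Server.Version}}'   # expect 28.5.2 per the log above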
	
	
	==> Docker <==
	Dec 20 02:13:25 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24384c9b6386768f183a17a14b0915b4c06115ceca79b379c9a8caeb87ac9be2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 20 02:13:26 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:26.077359482Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 20 02:13:33 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:33Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.166995649Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.247637303Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.247742747Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 20 02:13:33 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:33Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.870943978Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.870972001Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.874954248Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.875104860Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:46 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:46.013938388Z" level=error msg="Handler for POST /v1.51/containers/e389ed009c41/pause returned error: cannot pause container e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e: OCI runtime pause failed: container not running"
	Dec 20 02:13:46 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:46.096234565Z" level=info msg="ignoring event" container=e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 20 02:14:19 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:19Z" level=error msg="error getting RW layer size for container ID 'f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833': Error response from daemon: No such container: f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833"
	Dec 20 02:14:19 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833'"
	Dec 20 02:14:20 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:20Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-j9fnc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c17f03aae9a804c2000dd7a7f2df0a5c0e11cb7cc45d2898ceeb917e335ab8a6\""
	Dec 20 02:14:20 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.054814750Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.173663258Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.173805989Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 20 02:14:21 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:21Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.210054061Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.210106510Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.216155700Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.216230216Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	db82439a82773       6e38f40d628db                                                                                         1 second ago         Running             storage-provisioner       2                   b98cac4df9b58       storage-provisioner                                    kube-system
	3d0dc5e4eaf53       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        48 seconds ago       Running             kubernetes-dashboard      0                   c7214caee965e       kubernetes-dashboard-855c9754f9-v5f62                  kubernetes-dashboard
	bd3af300e51d6       56cc512116c8f                                                                                         58 seconds ago       Running             busybox                   1                   620275c9345e0       busybox                                                default
	c9a7560c3855f       52546a367cc9e                                                                                         58 seconds ago       Running             coredns                   1                   bd05cab39e53f       coredns-66bc5c9577-gjmjk                               kube-system
	e389ed009c414       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b98cac4df9b58       storage-provisioner                                    kube-system
	8a1598184096c       36eef8e07bdd6                                                                                         About a minute ago   Running             kube-proxy                1                   fceaaba1c1db3       kube-proxy-22tlj                                       kube-system
	2808d78b661f8       aec12dadf56dd                                                                                         About a minute ago   Running             kube-scheduler            1                   6d3fddf7afe4b       kube-scheduler-default-k8s-diff-port-032958            kube-system
	5d487135b34c5       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   57ad4b77ed607       etcd-default-k8s-diff-port-032958                      kube-system
	0be7d44211125       5826b25d990d7                                                                                         About a minute ago   Running             kube-controller-manager   1                   f7e02a8a528fa       kube-controller-manager-default-k8s-diff-port-032958   kube-system
	799ae6e77e4dc       aa27095f56193                                                                                         About a minute ago   Running             kube-apiserver            1                   c0277aff9f306       kube-apiserver-default-k8s-diff-port-032958            kube-system
	9a4671ba050b2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   9bfc558dcff48       busybox                                                default
	aef0cd5a3775d       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   4e8574a6b885b       coredns-66bc5c9577-gjmjk                               kube-system
	696c72bae65f2       36eef8e07bdd6                                                                                         2 minutes ago        Exited              kube-proxy                0                   959487a2071a7       kube-proxy-22tlj                                       kube-system
	37cee352777b9       aa27095f56193                                                                                         3 minutes ago        Exited              kube-apiserver            0                   042ea7540f943       kube-apiserver-default-k8s-diff-port-032958            kube-system
	6955eb7dbb7a8       a3e246e9556e9                                                                                         3 minutes ago        Exited              etcd                      0                   1ae4fd44c2900       etcd-default-k8s-diff-port-032958                      kube-system
	bc3e91d6c19d6       5826b25d990d7                                                                                         3 minutes ago        Exited              kube-controller-manager   0                   ec2c7b618f7f7       kube-controller-manager-default-k8s-diff-port-032958   kube-system
	44fb178dfab72       aec12dadf56dd                                                                                         3 minutes ago        Exited              kube-scheduler            0                   010ff1a843791       kube-scheduler-default-k8s-diff-port-032958            kube-system
	
	
	==> coredns [aef0cd5a3775] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c9a7560c3855] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39548 - 58159 "HINFO IN 6794078486954714189.4770737732440681574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045655293s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-032958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-032958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7cd9f41b7421760cf1f1eaa8725bdb975037b06d
	                    minikube.k8s.io/name=default-k8s-diff-port-032958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_20T02_11_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Dec 2025 02:11:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-032958
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Dec 2025 02:14:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.139
	  Hostname:    default-k8s-diff-port-032958
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a22ece73f0a74620b511d2c9063270d7
	  System UUID:                a22ece73-f0a7-4620-b511-d2c9063270d7
	  Boot ID:                    3a1ecf6e-4165-4ac3-94cb-43972902c57c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 coredns-66bc5c9577-gjmjk                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m53s
	  kube-system                 etcd-default-k8s-diff-port-032958                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-032958             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-032958    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-22tlj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-scheduler-default-k8s-diff-port-032958             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 metrics-server-746fcd58dc-r9hzl                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m7s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wzcc7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v5f62                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m51s                kube-proxy       
	  Normal   Starting                 65s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m5s (x8 over 3m5s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m5s (x8 over 3m5s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m5s (x7 over 3m5s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m5s                 kubelet          Starting kubelet.
	  Normal   Starting                 2m58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m57s                kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m57s                kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m57s                kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m54s                node-controller  Node default-k8s-diff-port-032958 event: Registered Node default-k8s-diff-port-032958 in Controller
	  Normal   NodeReady                2m53s                kubelet          Node default-k8s-diff-port-032958 status is now: NodeReady
	  Normal   Starting                 72s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x7 over 72s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  72s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 67s                  kubelet          Node default-k8s-diff-port-032958 has been rebooted, boot id: 3a1ecf6e-4165-4ac3-94cb-43972902c57c
	  Normal   RegisteredNode           63s                  node-controller  Node default-k8s-diff-port-032958 event: Registered Node default-k8s-diff-port-032958 in Controller
	  Normal   Starting                 2s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  1s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    1s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     1s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Dec20 02:12] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000038] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003240] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.994669] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.151734] kauditd_printk_skb: 1 callbacks suppressed
	[Dec20 02:13] kauditd_printk_skb: 393 callbacks suppressed
	[  +0.106540] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.723521] kauditd_printk_skb: 165 callbacks suppressed
	[  +3.591601] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.607561] kauditd_printk_skb: 259 callbacks suppressed
	[  +0.307946] kauditd_printk_skb: 17 callbacks suppressed
	[Dec20 02:14] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [5d487135b34c] <==
	{"level":"warn","ts":"2025-12-20T02:13:13.342109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.352027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.369107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.385882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.394142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.401109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.409984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.418468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.428572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.434912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.445522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.454332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.465435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.483111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.490648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.499438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.572659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:29.160598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"521.564224ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13052446816451747392 > lease_revoke:<id:35239b39868e0a7a>","response":"size:28"}
	{"level":"info","ts":"2025-12-20T02:13:29.161474Z","caller":"traceutil/trace.go:172","msg":"trace[1265080339] linearizableReadLoop","detail":"{readStateIndex:772; appliedIndex:771; }","duration":"411.486582ms","start":"2025-12-20T02:13:28.749972Z","end":"2025-12-20T02:13:29.161458Z","steps":["trace[1265080339] 'read index received'  (duration: 33.594µs)","trace[1265080339] 'applied index is now lower than readState.Index'  (duration: 411.451844ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-20T02:13:29.161591Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"411.631732ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-20T02:13:29.161613Z","caller":"traceutil/trace.go:172","msg":"trace[1600534378] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:727; }","duration":"411.662139ms","start":"2025-12-20T02:13:28.749943Z","end":"2025-12-20T02:13:29.161605Z","steps":["trace[1600534378] 'agreement among raft nodes before linearized reading'  (duration: 411.61436ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:13:29.162719Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.7284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958\" limit:1 ","response":"range_response_count:1 size:5168"}
	{"level":"info","ts":"2025-12-20T02:13:29.163046Z","caller":"traceutil/trace.go:172","msg":"trace[1971843700] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958; range_end:; response_count:1; response_revision:727; }","duration":"312.095123ms","start":"2025-12-20T02:13:28.850939Z","end":"2025-12-20T02:13:29.163034Z","steps":["trace[1971843700] 'agreement among raft nodes before linearized reading'  (duration: 311.117462ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:13:29.163083Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-20T02:13:28.850904Z","time spent":"312.166241ms","remote":"127.0.0.1:50178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":5191,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958\" limit:1 "}
	{"level":"info","ts":"2025-12-20T02:13:30.222290Z","caller":"traceutil/trace.go:172","msg":"trace[252289306] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"269.402974ms","start":"2025-12-20T02:13:29.952867Z","end":"2025-12-20T02:13:30.222270Z","steps":["trace[252289306] 'process raft request'  (duration: 269.235053ms)"],"step_count":1}
	
	
	==> etcd [6955eb7dbb7a] <==
	{"level":"warn","ts":"2025-12-20T02:11:19.847959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:11:19.949918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60078","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-20T02:12:07.244895Z","caller":"traceutil/trace.go:172","msg":"trace[798266364] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:511; }","duration":"213.060794ms","start":"2025-12-20T02:12:07.031794Z","end":"2025-12-20T02:12:07.244854Z","steps":["trace[798266364] 'read index received'  (duration: 213.055389ms)","trace[798266364] 'applied index is now lower than readState.Index'  (duration: 4.109µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-20T02:12:07.245037Z","caller":"traceutil/trace.go:172","msg":"trace[286472680] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"297.72601ms","start":"2025-12-20T02:12:06.947300Z","end":"2025-12-20T02:12:07.245026Z","steps":["trace[286472680] 'process raft request'  (duration: 297.578574ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:12:07.245042Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.193567ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-20T02:12:07.245100Z","caller":"traceutil/trace.go:172","msg":"trace[1257312239] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:493; }","duration":"213.303974ms","start":"2025-12-20T02:12:07.031787Z","end":"2025-12-20T02:12:07.245091Z","steps":["trace[1257312239] 'agreement among raft nodes before linearized reading'  (duration: 213.173447ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:12:08.425636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.814071ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13052446816422726242 > lease_revoke:<id:35239b39868e09cb>","response":"size:28"}
	{"level":"info","ts":"2025-12-20T02:12:15.681646Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-20T02:12:15.681764Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-032958","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.139:2380"],"advertise-client-urls":["https://192.168.83.139:2379"]}
	{"level":"error","ts":"2025-12-20T02:12:15.681878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-20T02:12:22.684233Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-20T02:12:22.686809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.686860Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"911810311894b523","current-leader-member-id":"911810311894b523"}
	{"level":"info","ts":"2025-12-20T02:12:22.687961Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-20T02:12:22.688006Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-20T02:12:22.691490Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-20T02:12:22.691626Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-20T02:12:22.691850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-20T02:12:22.692143Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.139:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-20T02:12:22.692250Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.139:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-20T02:12:22.692290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.139:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.695968Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.139:2380"}
	{"level":"error","ts":"2025-12-20T02:12:22.696039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.139:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.696144Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.139:2380"}
	{"level":"info","ts":"2025-12-20T02:12:22.696154Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-032958","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.139:2380"],"advertise-client-urls":["https://192.168.83.139:2379"]}
	
	
	==> kernel <==
	 02:14:21 up 1 min,  0 users,  load average: 0.75, 0.36, 0.13
	Linux default-k8s-diff-port-032958 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [37cee352777b] <==
	W1220 02:12:24.853572       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.853799       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.886374       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.979017       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.000084       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.002681       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.055296       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.076421       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.117381       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.121093       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.174493       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.192366       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.290406       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.339079       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.360110       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.370079       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.387509       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.392242       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.418164       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.463768       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.491271       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.551484       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.672624       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.708145       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.738375       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [799ae6e77e4d] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1220 02:13:15.375295       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1220 02:13:15.375659       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1220 02:13:15.376472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1220 02:13:16.429720       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1220 02:13:17.061950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1220 02:13:17.106313       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1220 02:13:17.143082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1220 02:13:17.149304       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1220 02:13:18.957068       1 controller.go:667] quota admission added evaluator for: endpoints
	I1220 02:13:18.991435       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1220 02:13:19.164812       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1220 02:13:19.310960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1220 02:13:19.735545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.251.44"}
	I1220 02:13:19.755566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.131.142"}
	W1220 02:14:18.888986       1 handler_proxy.go:99] no RequestInfo found in the context
	E1220 02:14:18.889066       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1220 02:14:18.889082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1220 02:14:18.898096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1220 02:14:18.898161       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1220 02:14:18.898178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0be7d4421112] <==
	I1220 02:13:18.949472       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1220 02:13:18.949789       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1220 02:13:18.911345       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1220 02:13:18.951606       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1220 02:13:18.954539       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1220 02:13:18.928851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:13:18.929424       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1220 02:13:18.967844       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:13:18.972782       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1220 02:13:18.972915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:13:18.972963       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1220 02:13:18.972974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1220 02:13:18.978649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:13:19.003577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1220 02:13:19.507058       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.557712       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.579564       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.584826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.605680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.607173       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.612896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.619443       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1220 02:13:28.942844       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1220 02:14:18.972742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1220 02:14:19.019968       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [bc3e91d6c19d] <==
	I1220 02:11:27.795191       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1220 02:11:27.795197       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1220 02:11:27.795206       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1220 02:11:27.795417       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1220 02:11:27.804009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1220 02:11:27.809262       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-032958" podCIDRs=["10.244.0.0/24"]
	I1220 02:11:27.814727       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1220 02:11:27.816005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:11:27.818197       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1220 02:11:27.827800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:11:27.835424       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1220 02:11:27.835598       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1220 02:11:27.835777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-032958"
	I1220 02:11:27.835796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1220 02:11:27.835833       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1220 02:11:27.835932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:11:27.835939       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1220 02:11:27.835945       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1220 02:11:27.838763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1220 02:11:27.838905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1220 02:11:27.838920       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1220 02:11:27.843443       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1220 02:11:27.845334       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1220 02:11:27.853111       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:11:32.836931       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [696c72bae65f] <==
	I1220 02:11:30.533772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1220 02:11:30.634441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1220 02:11:30.634675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.139"]
	E1220 02:11:30.635231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1220 02:11:30.763679       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1220 02:11:30.764391       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1220 02:11:30.764588       1 server_linux.go:132] "Using iptables Proxier"
	I1220 02:11:30.801765       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1220 02:11:30.802104       1 server.go:527] "Version info" version="v1.34.3"
	I1220 02:11:30.802116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:11:30.821963       1 config.go:309] "Starting node config controller"
	I1220 02:11:30.822050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1220 02:11:30.822061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1220 02:11:30.826798       1 config.go:200] "Starting service config controller"
	I1220 02:11:30.826954       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1220 02:11:30.829847       1 config.go:106] "Starting endpoint slice config controller"
	I1220 02:11:30.830754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1220 02:11:30.831058       1 config.go:403] "Starting serviceCIDR config controller"
	I1220 02:11:30.831070       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1220 02:11:30.937234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1220 02:11:30.937307       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1220 02:11:30.933586       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8a1598184096] <==
	I1220 02:13:16.000300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1220 02:13:16.100699       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1220 02:13:16.100734       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.139"]
	E1220 02:13:16.100793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1220 02:13:16.145133       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1220 02:13:16.145459       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1220 02:13:16.145683       1 server_linux.go:132] "Using iptables Proxier"
	I1220 02:13:16.156575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1220 02:13:16.157810       1 server.go:527] "Version info" version="v1.34.3"
	I1220 02:13:16.158021       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:13:16.162964       1 config.go:200] "Starting service config controller"
	I1220 02:13:16.162999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1220 02:13:16.163014       1 config.go:106] "Starting endpoint slice config controller"
	I1220 02:13:16.163018       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1220 02:13:16.163027       1 config.go:403] "Starting serviceCIDR config controller"
	I1220 02:13:16.163030       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1220 02:13:16.166161       1 config.go:309] "Starting node config controller"
	I1220 02:13:16.166330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1220 02:13:16.166459       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1220 02:13:16.263219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1220 02:13:16.263310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1220 02:13:16.263327       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2808d78b661f] <==
	I1220 02:13:12.086051       1 serving.go:386] Generated self-signed cert in-memory
	I1220 02:13:14.442280       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1220 02:13:14.442329       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:13:14.455168       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1220 02:13:14.455457       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1220 02:13:14.455686       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:13:14.455746       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:13:14.455761       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.455884       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.456412       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1220 02:13:14.456890       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1220 02:13:14.556446       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.556930       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1220 02:13:14.557255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [44fb178dfab7] <==
	E1220 02:11:20.961299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1220 02:11:20.964924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1220 02:11:20.965293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1220 02:11:20.965512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1220 02:11:21.832650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1220 02:11:21.832947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1220 02:11:21.847314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1220 02:11:21.848248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1220 02:11:21.883915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1220 02:11:21.922925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1220 02:11:21.948354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1220 02:11:21.988956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1220 02:11:22.072320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1220 02:11:22.112122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1220 02:11:22.125974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1220 02:11:22.146170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1220 02:11:22.171595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1220 02:11:22.226735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1220 02:11:25.144517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:12:15.706842       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1220 02:12:15.706898       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1220 02:12:15.706917       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1220 02:12:15.706972       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:12:15.707164       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1220 02:12:15.707186       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.306089    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959487a2071a7d265b217d3aee2b7e4fbafb02bb0585f7ff40beae30aa17b725"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.330357    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae4fd44c29005031aebaf78608172fd0e41f69bee4dd72c3ea114e035fc7e8e"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.330548    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:20.342746    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-032958\" already exists" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.617352    4206 apiserver.go:52] "Watching apiserver"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.683834    4206 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.742502    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a41d99-89a6-4d25-b7cf-57f49fbdea5a-lib-modules\") pod \"kube-proxy-22tlj\" (UID: \"07a41d99-89a6-4d25-b7cf-57f49fbdea5a\") " pod="kube-system/kube-proxy-22tlj"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.743177    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a41d99-89a6-4d25-b7cf-57f49fbdea5a-xtables-lock\") pod \"kube-proxy-22tlj\" (UID: \"07a41d99-89a6-4d25-b7cf-57f49fbdea5a\") " pod="kube-system/kube-proxy-22tlj"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.743223    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a74ca514-b136-40a6-9fd7-27c96e23bca7-tmp\") pod \"storage-provisioner\" (UID: \"a74ca514-b136-40a6-9fd7-27c96e23bca7\") " pod="kube-system/storage-provisioner"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.941330    4206 scope.go:117] "RemoveContainer" containerID="e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.187714    4206 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.188551    4206 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.190144    4206 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-wzcc7_kubernetes-dashboard(6951d269-7815-46e0-bfd0-c9dba02d7a47): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.191545    4206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wzcc7" podUID="6951d269-7815-46e0-bfd0-c9dba02d7a47"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218131    4206 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218192    4206 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218346    4206 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9hzl_kube-system(ea98af6d-2555-48e1-9403-91cdbace7b1c): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.219866    4206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9hzl" podUID="ea98af6d-2555-48e1-9403-91cdbace7b1c"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.415555    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.445678    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.445968    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.446323    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.476713    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-032958\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.478173    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-032958\" already exists" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.479470    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-032958\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-032958"
	
	
	==> kubernetes-dashboard [3d0dc5e4eaf5] <==
	2025/12/20 02:13:33 Using namespace: kubernetes-dashboard
	2025/12/20 02:13:33 Using in-cluster config to connect to apiserver
	2025/12/20 02:13:33 Using secret token for csrf signing
	2025/12/20 02:13:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/20 02:13:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/20 02:13:33 Successful initial request to the apiserver, version: v1.34.3
	2025/12/20 02:13:33 Generating JWE encryption key
	2025/12/20 02:13:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/20 02:13:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/20 02:13:33 Initializing JWE encryption key from synchronized object
	2025/12/20 02:13:33 Creating in-cluster Sidecar client
	2025/12/20 02:13:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/20 02:13:33 Serving insecurely on HTTP port: 9090
	2025/12/20 02:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/20 02:13:33 Starting overwatch
	
	
	==> storage-provisioner [db82439a8277] <==
	I1220 02:14:21.324067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1220 02:14:21.372255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1220 02:14:21.373462       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1220 02:14:21.382159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e389ed009c41] <==
	I1220 02:13:15.862545       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1220 02:13:45.872285       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
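Editor's aside (illustrative, not part of the captured report): the post-mortem helper re-runs the same status --format={{.APIServer}} probe on the next line. A minimal Go sketch of that probe follows, reusing the binary path and profile name exactly as they appear in the command below; it assumes it is run from the repository root where out/minikube-linux-amd64 exists, and it is not part of the test suite.

	// status_probe.go: illustrative sketch only, not part of this report or of
	// the test suite. It shells out to the same command the post-mortem helper
	// runs below and prints the reported apiserver state for the profile.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Profile/node name taken from the command in the log below.
		profile := "default-k8s-diff-port-032958"
		out, err := exec.Command(
			"out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}",
			"-p", profile, "-n", profile,
		).CombinedOutput()
		// "minikube status" may exit non-zero when a component is not Running,
		// so the printed state matters more than the error value here.
		state := strings.TrimSpace(string(out))
		fmt.Printf("apiserver state=%q err=%v\n", state, err)
	}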
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
I1220 02:14:24.115175   13018 config.go:182] Loaded profile config "custom-flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7: exit status 1 (88.623563ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-r9hzl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wzcc7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7: exit status 1
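Editor's aside (illustrative, not part of the captured report): the non-running-pods query above (kubectl get po -A --field-selector=status.phase!=Running) can also be expressed with client-go. The sketch below is a hedged equivalent, assuming the default kubeconfig location and using the profile's context name from the commands above; it is not part of the test suite.

	// list_not_running.go: illustrative client-go sketch only. Lists pods in all
	// namespaces whose phase is not Running, mirroring the field selector used
	// by the post-mortem query above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and select the profile's context, as
		// kubectl --context default-k8s-diff-port-032958 does above.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-032958"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same field selector as the post-mortem query.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}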
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-032958 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-032958 logs -n 25: (1.589780324s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │           PROFILE            │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-503505 sudo journalctl -xeu kubelet --all --full --no-pager                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/kubernetes/kubelet.conf                                                                              │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /var/lib/kubelet/config.yaml                                                                              │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status docker --all --full --no-pager                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat docker --no-pager                                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/docker/daemon.json                                                                                   │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo docker system info                                                                                            │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status cri-docker --all --full --no-pager                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat cri-docker --no-pager                                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                      │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cri-dockerd --version                                                                                         │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status containerd --all --full --no-pager                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat containerd --no-pager                                                                           │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /lib/systemd/system/containerd.service                                                                    │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo cat /etc/containerd/config.toml                                                                               │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo containerd config dump                                                                                        │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo systemctl status crio --all --full --no-pager                                                                 │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │                     │
	│ ssh     │ -p kindnet-503505 sudo systemctl cat crio --no-pager                                                                                 │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                       │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ ssh     │ -p kindnet-503505 sudo crio config                                                                                                   │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ delete  │ -p kindnet-503505                                                                                                                    │ kindnet-503505               │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │ 20 Dec 25 02:13 UTC │
	│ start   │ -p false-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --container-runtime=docker │ false-503505                 │ minitest │ v1.37.0 │ 20 Dec 25 02:13 UTC │                     │
	│ unpause │ -p default-k8s-diff-port-032958 --alsologtostderr -v=1                                                                               │ default-k8s-diff-port-032958 │ minitest │ v1.37.0 │ 20 Dec 25 02:14 UTC │ 20 Dec 25 02:14 UTC │
	│ ssh     │ -p custom-flannel-503505 pgrep -a kubelet                                                                                            │ custom-flannel-503505        │ minitest │ v1.37.0 │ 20 Dec 25 02:14 UTC │ 20 Dec 25 02:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/20 02:13:53
	Running on machine: minitest-vm-9d09530a
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1220 02:13:53.658426   38979 out.go:360] Setting OutFile to fd 1 ...
	I1220 02:13:53.658597   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 02:13:53.658612   38979 out.go:374] Setting ErrFile to fd 2...
	I1220 02:13:53.658620   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 02:13:53.658880   38979 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 02:13:53.659482   38979 out.go:368] Setting JSON to false
	I1220 02:13:53.660578   38979 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":3547,"bootTime":1766193287,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 02:13:53.660687   38979 start.go:143] virtualization: kvm guest
	I1220 02:13:53.662866   38979 out.go:179] * [false-503505] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 02:13:53.664260   38979 notify.go:221] Checking for updates...
	I1220 02:13:53.664290   38979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 02:13:53.665824   38979 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 02:13:53.667283   38979 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:53.668904   38979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 02:13:53.670341   38979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 02:13:53.672156   38979 config.go:182] Loaded profile config "calico-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672297   38979 config.go:182] Loaded profile config "custom-flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672434   38979 config.go:182] Loaded profile config "default-k8s-diff-port-032958": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:13:53.672545   38979 config.go:182] Loaded profile config "guest-073858": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1220 02:13:53.672679   38979 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 02:13:53.714352   38979 out.go:179] * Using the kvm2 driver based on user configuration
	I1220 02:13:53.715582   38979 start.go:309] selected driver: kvm2
	I1220 02:13:53.715609   38979 start.go:928] validating driver "kvm2" against <nil>
	I1220 02:13:53.715626   38979 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 02:13:53.716847   38979 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1220 02:13:53.717254   38979 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1220 02:13:53.717297   38979 cni.go:84] Creating CNI manager for "false"
	I1220 02:13:53.717349   38979 start.go:353] cluster config:
	{Name:false-503505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:false-503505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 02:13:53.717508   38979 iso.go:125] acquiring lock: {Name:mk8cff2fd2ec419d0f1f974993910ae0235f0b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1220 02:13:53.719137   38979 out.go:179] * Starting "false-503505" primary control-plane node in "false-503505" cluster
	I1220 02:13:53.720475   38979 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1220 02:13:53.720519   38979 preload.go:203] Found local preload: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1220 02:13:53.720529   38979 cache.go:65] Caching tarball of preloaded images
	I1220 02:13:53.720653   38979 preload.go:251] Found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1220 02:13:53.720670   38979 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1220 02:13:53.720801   38979 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json ...
	I1220 02:13:53.720830   38979 lock.go:35] WriteFile acquiring /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json: {Name:mkc8b6869a0bb6c3a942663395236fb8c2775a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:13:53.721027   38979 start.go:360] acquireMachinesLock for false-503505: {Name:mkeb3229b5d18611c16c8e938b31492b9b6546b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1220 02:13:53.721080   38979 start.go:364] duration metric: took 32.113µs to acquireMachinesLock for "false-503505"
	I1220 02:13:53.721108   38979 start.go:93] Provisioning new machine with config: &{Name:false-503505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:false-503505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1220 02:13:53.721191   38979 start.go:125] createHost starting for "" (driver="kvm2")
	I1220 02:13:53.104657   37878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1220 02:13:53.203649   37878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1220 02:13:53.414002   37878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1220 02:13:53.414235   37878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-503505 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1220 02:13:53.718885   37878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1220 02:13:53.719606   37878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-503505 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1220 02:13:54.333369   37878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1220 02:13:54.424119   37878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1220 02:13:54.440070   37878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1220 02:13:54.440221   37878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1220 02:13:54.643883   37878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1220 02:13:54.882013   37878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1220 02:13:54.904688   37878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1220 02:13:55.025586   37878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1220 02:13:55.145485   37878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1220 02:13:55.145626   37878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1220 02:13:55.148326   37878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1220 02:13:54.723698   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	W1220 02:13:57.088471   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	I1220 02:13:55.150289   37878 out.go:252]   - Booting up control plane ...
	I1220 02:13:55.150458   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1220 02:13:55.151333   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1220 02:13:55.152227   37878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1220 02:13:55.175699   37878 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1220 02:13:55.175981   37878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1220 02:13:55.186275   37878 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1220 02:13:55.186852   37878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1220 02:13:55.186945   37878 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1220 02:13:55.443272   37878 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1220 02:13:55.443453   37878 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1220 02:13:57.443421   37878 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001962214s
	I1220 02:13:57.453249   37878 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1220 02:13:57.453392   37878 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.110:8443/livez
	I1220 02:13:57.453521   37878 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1220 02:13:57.453636   37878 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1220 02:13:53.723129   38979 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1220 02:13:53.723383   38979 start.go:159] libmachine.API.Create for "false-503505" (driver="kvm2")
	I1220 02:13:53.723423   38979 client.go:173] LocalClient.Create starting
	I1220 02:13:53.723510   38979 main.go:144] libmachine: Reading certificate data from /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem
	I1220 02:13:53.723557   38979 main.go:144] libmachine: Decoding PEM data...
	I1220 02:13:53.723581   38979 main.go:144] libmachine: Parsing certificate...
	I1220 02:13:53.723676   38979 main.go:144] libmachine: Reading certificate data from /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem
	I1220 02:13:53.723706   38979 main.go:144] libmachine: Decoding PEM data...
	I1220 02:13:53.723725   38979 main.go:144] libmachine: Parsing certificate...
	I1220 02:13:53.724182   38979 main.go:144] libmachine: creating domain...
	I1220 02:13:53.724217   38979 main.go:144] libmachine: creating network...
	I1220 02:13:53.725920   38979 main.go:144] libmachine: found existing default network
	I1220 02:13:53.726255   38979 main.go:144] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>650ca552-1913-49ac-a1fd-736d0c584a06</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:de:58:ff'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1220 02:13:53.727630   38979 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e8:02:c4} reservation:<nil>}
	I1220 02:13:53.728421   38979 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:7d:ff} reservation:<nil>}
	I1220 02:13:53.729869   38979 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001aac760}
	I1220 02:13:53.729965   38979 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-false-503505</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1220 02:13:53.736168   38979 main.go:144] libmachine: creating private network mk-false-503505 192.168.61.0/24...
	I1220 02:13:53.810612   38979 main.go:144] libmachine: private network mk-false-503505 192.168.61.0/24 created
	I1220 02:13:53.810976   38979 main.go:144] libmachine: <network>
	  <name>mk-false-503505</name>
	  <uuid>145d091e-eda6-4cfe-8946-ea394cfc6f9d</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:b5:b9:98'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
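	For orientation, the network creation logged above reduces to two libvirt calls (define, then start). A minimal standalone sketch, assuming the libvirt.org/go/libvirt Go bindings and with netXML standing in for the <network> document printed above (this is an illustration, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"log"

		"libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the same URI the log uses (qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// netXML mirrors the mk-false-503505 definition shown in the log.
		netXML := `<network><name>mk-false-503505</name><dns enable='no'/><ip address='192.168.61.1' netmask='255.255.255.0'><dhcp><range start='192.168.61.2' end='192.168.61.253'/></dhcp></ip></network>`

		// Define the persistent network, then activate it ("creating private network ... created").
		net, err := conn.NetworkDefineXML(netXML)
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("network mk-false-503505 is active")
	}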
	
	I1220 02:13:53.811017   38979 main.go:144] libmachine: setting up store path in /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 ...
	I1220 02:13:53.811066   38979 main.go:144] libmachine: building disk image from file:///home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1220 02:13:53.811082   38979 common.go:152] Making disk image using store path: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:53.811185   38979 main.go:144] libmachine: Downloading /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/boot2docker.iso from file:///home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1220 02:13:54.101881   38979 common.go:159] Creating ssh key: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa...
	I1220 02:13:54.171818   38979 common.go:165] Creating raw disk image: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk...
	I1220 02:13:54.171860   38979 main.go:144] libmachine: Writing magic tar header
	I1220 02:13:54.171878   38979 main.go:144] libmachine: Writing SSH key tar header
	I1220 02:13:54.171952   38979 common.go:179] Fixing permissions on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 ...
	I1220 02:13:54.172017   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505
	I1220 02:13:54.172042   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505 (perms=drwx------)
	I1220 02:13:54.172055   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines
	I1220 02:13:54.172068   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines (perms=drwxr-xr-x)
	I1220 02:13:54.172080   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 02:13:54.172089   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube (perms=drwxr-xr-x)
	I1220 02:13:54.172097   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160
	I1220 02:13:54.172106   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160 (perms=drwxrwxr-x)
	I1220 02:13:54.172116   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest/minikube-integration
	I1220 02:13:54.172127   38979 main.go:144] libmachine: setting executable bit set on /home/minitest/minikube-integration (perms=drwxrwxr-x)
	I1220 02:13:54.172134   38979 main.go:144] libmachine: checking permissions on dir: /home/minitest
	I1220 02:13:54.172143   38979 main.go:144] libmachine: setting executable bit set on /home/minitest (perms=drwxr-x--x)
	I1220 02:13:54.172153   38979 main.go:144] libmachine: checking permissions on dir: /home
	I1220 02:13:54.172162   38979 main.go:144] libmachine: skipping /home - not owner
	I1220 02:13:54.172166   38979 main.go:144] libmachine: defining domain...
	I1220 02:13:54.173523   38979 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>false-503505</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-false-503505'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1220 02:13:54.178932   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:b7:52:73 in network default
	I1220 02:13:54.179675   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:54.179696   38979 main.go:144] libmachine: starting domain...
	I1220 02:13:54.179701   38979 main.go:144] libmachine: ensuring networks are active...
	I1220 02:13:54.180774   38979 main.go:144] libmachine: Ensuring network default is active
	I1220 02:13:54.181409   38979 main.go:144] libmachine: Ensuring network mk-false-503505 is active
	I1220 02:13:54.182238   38979 main.go:144] libmachine: getting domain XML...
	I1220 02:13:54.183538   38979 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>false-503505</name>
	  <uuid>624dd300-6a99-4c02-9eff-8eb33e6519e9</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-noble'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/false-503505.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4e:1e:41'/>
	      <source network='mk-false-503505'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b7:52:73'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
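	Continuing the hedged sketch from the network step above (same assumed libvirt.org/go/libvirt bindings, conn already connected), the define-and-boot step that yields the expanded domain XML just printed comes down to:

	// domXML stands in for the <domain type='kvm'> document defined earlier in the log.
	dom, err := conn.DomainDefineXML(domXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Create() boots the defined domain, i.e. the "waiting for domain to start ... domain is now running" step below.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}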
	
	I1220 02:13:55.365223   38979 main.go:144] libmachine: waiting for domain to start...
	I1220 02:13:55.367393   38979 main.go:144] libmachine: domain is now running
	I1220 02:13:55.367419   38979 main.go:144] libmachine: waiting for IP...
	I1220 02:13:55.368500   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.369502   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.369522   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.369923   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.369977   38979 retry.go:31] will retry after 247.996373ms: waiting for domain to come up
	I1220 02:13:55.619698   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.620501   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.620524   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.620981   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.621018   38979 retry.go:31] will retry after 253.163992ms: waiting for domain to come up
	I1220 02:13:55.875623   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:55.876522   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:55.876543   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:55.876997   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:55.877034   38979 retry.go:31] will retry after 322.078046ms: waiting for domain to come up
	I1220 02:13:56.200749   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:56.201573   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:56.201590   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:56.201993   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:56.202032   38979 retry.go:31] will retry after 398.279098ms: waiting for domain to come up
	I1220 02:13:56.601723   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:56.602519   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:56.602554   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:56.603065   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:56.603103   38979 retry.go:31] will retry after 668.508453ms: waiting for domain to come up
	I1220 02:13:57.272883   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:57.273735   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:57.273763   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:57.274179   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:57.274223   38979 retry.go:31] will retry after 936.48012ms: waiting for domain to come up
	I1220 02:13:58.212951   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:58.213934   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:58.213955   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:58.214490   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:58.214540   38979 retry.go:31] will retry after 1.101549544s: waiting for domain to come up
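	The retry loop above is the driver polling libvirt for a DHCP lease on the new MAC. Continuing the same hedged sketch (net from the network step, lease field names as exposed by those bindings, plus fmt/log/time imports), the lookup is roughly:

	// Poll the private network's DHCP leases until the domain's MAC shows up.
	mac := "52:54:00:4e:1e:41" // the address logged above
	var ip string
	for attempt := 0; attempt < 60 && ip == ""; attempt++ {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			log.Fatal(err)
		}
		for _, lease := range leases {
			if lease.Mac == mac {
				ip = lease.IPaddr
			}
		}
		if ip == "" {
			time.Sleep(time.Second) // the real code backs off with growing, jittered delays (retry.go)
		}
	}
	fmt.Println("domain IP:", ip)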
	W1220 02:13:59.093909   37762 node_ready.go:57] node "calico-503505" has "Ready":"False" status (will retry)
	I1220 02:14:00.089963   37762 node_ready.go:49] node "calico-503505" is "Ready"
	I1220 02:14:00.090003   37762 node_ready.go:38] duration metric: took 9.504754397s for node "calico-503505" to be "Ready" ...
	I1220 02:14:00.090027   37762 api_server.go:52] waiting for apiserver process to appear ...
	I1220 02:14:00.090096   37762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 02:14:00.121908   37762 api_server.go:72] duration metric: took 11.479258368s to wait for apiserver process to appear ...
	I1220 02:14:00.121945   37762 api_server.go:88] waiting for apiserver healthz status ...
	I1220 02:14:00.121968   37762 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I1220 02:14:00.133024   37762 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I1220 02:14:00.134499   37762 api_server.go:141] control plane version: v1.34.3
	I1220 02:14:00.134533   37762 api_server.go:131] duration metric: took 12.580039ms to wait for apiserver health ...
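	The healthz probe logged here is an HTTPS GET against the apiserver endpoint. A self-contained sketch of an equivalent check, using only the Go standard library and skipping CA verification purely to keep the example short (the real check trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.226:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" as in the log above
	}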
	I1220 02:14:00.134544   37762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1220 02:14:00.143085   37762 system_pods.go:59] 9 kube-system pods found
	I1220 02:14:00.143143   37762 system_pods.go:61] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.143160   37762 system_pods.go:61] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.143171   37762 system_pods.go:61] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.143177   37762 system_pods.go:61] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.143183   37762 system_pods.go:61] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.143188   37762 system_pods.go:61] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.143194   37762 system_pods.go:61] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.143219   37762 system_pods.go:61] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.143233   37762 system_pods.go:61] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.143243   37762 system_pods.go:74] duration metric: took 8.690731ms to wait for pod list to return data ...
	I1220 02:14:00.143254   37762 default_sa.go:34] waiting for default service account to be created ...
	I1220 02:14:00.147300   37762 default_sa.go:45] found service account: "default"
	I1220 02:14:00.147335   37762 default_sa.go:55] duration metric: took 4.072144ms for default service account to be created ...
	I1220 02:14:00.147349   37762 system_pods.go:116] waiting for k8s-apps to be running ...
	I1220 02:14:00.153827   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.153869   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.153882   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.153892   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.153900   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.153907   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.153911   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.153917   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.153922   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.153930   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.153953   37762 retry.go:31] will retry after 191.011989ms: missing components: kube-dns
	I1220 02:14:00.353588   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.353638   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.353652   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.353665   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.353673   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.353681   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.353688   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.353696   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.353702   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.353710   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.353731   37762 retry.go:31] will retry after 332.593015ms: missing components: kube-dns
	I1220 02:14:00.697960   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:00.698016   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:00.698032   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:00.698045   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:00.698051   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:00.698057   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:00.698062   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:00.698068   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:00.698073   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:00.698080   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:00.698098   37762 retry.go:31] will retry after 441.450882ms: missing components: kube-dns
	I1220 02:14:01.147620   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:01.147663   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:01.147675   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:01.147685   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:01.147690   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:01.147697   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:01.147702   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:01.147707   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:01.147711   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:01.147718   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:01.147737   37762 retry.go:31] will retry after 398.996064ms: missing components: kube-dns
	I1220 02:14:01.555710   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:01.555752   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:01.555764   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:01.555774   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:01.555779   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:01.555786   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:01.555791   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:01.555797   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:01.555802   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:01.555813   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:01.555831   37762 retry.go:31] will retry after 742.519055ms: missing components: kube-dns
	I1220 02:14:02.306002   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:02.306049   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:02.306068   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:02.306080   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:02.306088   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:02.306097   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:02.306102   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:02.306109   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:02.306114   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:02.306119   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:02.306141   37762 retry.go:31] will retry after 687.588334ms: missing components: kube-dns
	I1220 02:14:01.475480   37878 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.023883563s
	I1220 02:13:59.318088   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:13:59.319169   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:13:59.319195   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:13:59.319707   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:13:59.319759   38979 retry.go:31] will retry after 1.133836082s: waiting for domain to come up
	I1220 02:14:00.455752   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:00.457000   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:00.457032   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:00.457642   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:00.457696   38979 retry.go:31] will retry after 1.689205474s: waiting for domain to come up
	I1220 02:14:02.149657   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:02.150579   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:02.150669   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:02.151167   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:02.151218   38979 retry.go:31] will retry after 1.402452731s: waiting for domain to come up
	I1220 02:14:03.555309   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:03.556319   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:03.556389   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:03.556908   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:03.556948   38979 retry.go:31] will retry after 2.79303956s: waiting for domain to come up
	I1220 02:14:03.304845   37878 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.854389668s
	I1220 02:14:04.452000   37878 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001670897s
	I1220 02:14:04.482681   37878 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1220 02:14:04.509128   37878 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1220 02:14:04.533960   37878 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1220 02:14:04.534255   37878 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-503505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1220 02:14:04.549617   37878 kubeadm.go:319] [bootstrap-token] Using token: 5feew1.aaci0na7tzxpkq74
	I1220 02:14:04.551043   37878 out.go:252]   - Configuring RBAC rules ...
	I1220 02:14:04.551218   37878 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1220 02:14:04.561847   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1220 02:14:04.591000   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1220 02:14:04.597908   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1220 02:14:04.606680   37878 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1220 02:14:04.614933   37878 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1220 02:14:04.862442   37878 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1220 02:14:05.356740   37878 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1220 02:14:05.862025   37878 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1220 02:14:05.865061   37878 kubeadm.go:319] 
	I1220 02:14:05.865156   37878 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1220 02:14:05.865168   37878 kubeadm.go:319] 
	I1220 02:14:05.865282   37878 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1220 02:14:05.865294   37878 kubeadm.go:319] 
	I1220 02:14:05.865359   37878 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1220 02:14:05.865464   37878 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1220 02:14:05.865569   37878 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1220 02:14:05.865593   37878 kubeadm.go:319] 
	I1220 02:14:05.865675   37878 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1220 02:14:05.865685   37878 kubeadm.go:319] 
	I1220 02:14:05.865781   37878 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1220 02:14:05.865795   37878 kubeadm.go:319] 
	I1220 02:14:05.865876   37878 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1220 02:14:05.865983   37878 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1220 02:14:05.866079   37878 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1220 02:14:05.866085   37878 kubeadm.go:319] 
	I1220 02:14:05.866221   37878 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1220 02:14:05.866332   37878 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1220 02:14:05.866337   37878 kubeadm.go:319] 
	I1220 02:14:05.866459   37878 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5feew1.aaci0na7tzxpkq74 \
	I1220 02:14:05.866573   37878 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:34b132c11c5a062e0480b441f2caac7fcba372b596da4b2c80fd8c00c74704a7 \
	I1220 02:14:05.866595   37878 kubeadm.go:319] 	--control-plane 
	I1220 02:14:05.866599   37878 kubeadm.go:319] 
	I1220 02:14:05.866684   37878 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1220 02:14:05.866688   37878 kubeadm.go:319] 
	I1220 02:14:05.866779   37878 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5feew1.aaci0na7tzxpkq74 \
	I1220 02:14:05.866902   37878 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:34b132c11c5a062e0480b441f2caac7fcba372b596da4b2c80fd8c00c74704a7 
	I1220 02:14:05.869888   37878 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1220 02:14:05.869959   37878 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1220 02:14:05.871868   37878 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1220 02:14:03.004292   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:03.004339   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:03.004352   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:03.004361   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:03.004367   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:03.004374   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:03.004379   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:03.004384   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:03.004389   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:03.004394   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:03.004412   37762 retry.go:31] will retry after 732.081748ms: missing components: kube-dns
	I1220 02:14:03.744119   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:03.744161   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:03.744175   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:03.744185   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:03.744191   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:03.744214   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:03.744221   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:03.744227   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:03.744232   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:03.744241   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:03.744273   37762 retry.go:31] will retry after 1.276813322s: missing components: kube-dns
	I1220 02:14:05.030079   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:05.030129   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:05.030146   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:05.030161   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:05.030168   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:05.030187   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:05.030194   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:05.030221   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:05.030229   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:05.030235   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:05.030257   37762 retry.go:31] will retry after 1.238453929s: missing components: kube-dns
	I1220 02:14:06.275974   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:06.276021   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:06.276033   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:06.276049   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:06.276055   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:06.276061   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:06.276066   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:06.276077   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:06.276083   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:06.276087   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:06.276106   37762 retry.go:31] will retry after 1.908248969s: missing components: kube-dns
	I1220 02:14:05.873406   37878 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1220 02:14:05.873469   37878 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1220 02:14:05.881393   37878 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1220 02:14:05.881431   37878 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1220 02:14:05.936780   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1220 02:14:06.396862   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:06.396880   37878 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1220 02:14:06.396862   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-503505 minikube.k8s.io/updated_at=2025_12_20T02_14_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7cd9f41b7421760cf1f1eaa8725bdb975037b06d minikube.k8s.io/name=custom-flannel-503505 minikube.k8s.io/primary=true
	I1220 02:14:06.630781   37878 ops.go:34] apiserver oom_adj: -16
	I1220 02:14:06.630941   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:07.131072   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:07.631526   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:06.351650   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:06.352735   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:06.352774   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:06.353319   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:06.353358   38979 retry.go:31] will retry after 3.225841356s: waiting for domain to come up
	I1220 02:14:08.131099   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:08.631429   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:09.131400   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:09.631470   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.131821   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.631264   37878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1220 02:14:10.765536   37878 kubeadm.go:1114] duration metric: took 4.368721457s to wait for elevateKubeSystemPrivileges
	I1220 02:14:10.765599   37878 kubeadm.go:403] duration metric: took 18.502801612s to StartCluster
	I1220 02:14:10.765625   37878 settings.go:142] acquiring lock: {Name:mk57472848b32b0320e862b3ad8a64076ed3d76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:14:10.765731   37878 settings.go:150] Updating kubeconfig:  /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 02:14:10.767410   37878 lock.go:35] WriteFile acquiring /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig: {Name:mk7e6532318eb55e3c1811a528040bd41c46d8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1220 02:14:10.767716   37878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1220 02:14:10.767786   37878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1220 02:14:10.767867   37878 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-503505"
	I1220 02:14:10.767885   37878 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-503505"
	I1220 02:14:10.767747   37878 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1220 02:14:10.767912   37878 host.go:66] Checking if "custom-flannel-503505" exists ...
	I1220 02:14:10.767936   37878 config.go:182] Loaded profile config "custom-flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:14:10.767992   37878 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-503505"
	I1220 02:14:10.768006   37878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-503505"
	I1220 02:14:10.769347   37878 out.go:179] * Verifying Kubernetes components...
	I1220 02:14:10.770891   37878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:10.772643   37878 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-503505"
	I1220 02:14:10.772686   37878 host.go:66] Checking if "custom-flannel-503505" exists ...
	I1220 02:14:10.772827   37878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1220 02:14:10.774271   37878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1220 02:14:10.774291   37878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1220 02:14:10.775118   37878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1220 02:14:10.775173   37878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1220 02:14:10.778715   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779148   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779240   37878 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:50", ip: ""} in network mk-custom-flannel-503505: {Iface:virbr4 ExpiryTime:2025-12-20 03:13:37 +0000 UTC Type:0 Mac:52:54:00:31:8f:50 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:custom-flannel-503505 Clientid:01:52:54:00:31:8f:50}
	I1220 02:14:10.779272   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined IP address 192.168.72.110 and MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.779776   37878 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/custom-flannel-503505/id_rsa Username:docker}
	I1220 02:14:10.780325   37878 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:50", ip: ""} in network mk-custom-flannel-503505: {Iface:virbr4 ExpiryTime:2025-12-20 03:13:37 +0000 UTC Type:0 Mac:52:54:00:31:8f:50 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:custom-flannel-503505 Clientid:01:52:54:00:31:8f:50}
	I1220 02:14:10.780367   37878 main.go:144] libmachine: domain custom-flannel-503505 has defined IP address 192.168.72.110 and MAC address 52:54:00:31:8f:50 in network mk-custom-flannel-503505
	I1220 02:14:10.780605   37878 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/custom-flannel-503505/id_rsa Username:docker}
	I1220 02:14:11.077940   37878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1220 02:14:11.193874   37878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1220 02:14:11.505786   37878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1220 02:14:11.514993   37878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1220 02:14:11.665088   37878 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1220 02:14:11.666520   37878 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-503505" to be "Ready" ...
	I1220 02:14:12.188508   37878 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-503505" context rescaled to 1 replicas
	I1220 02:14:12.198043   37878 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1220 02:14:08.191550   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:08.191589   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:08.191605   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:08.191621   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:08.191627   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:08.191633   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:08.191639   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:08.191645   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:08.191652   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:08.191661   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:08.191680   37762 retry.go:31] will retry after 2.235844761s: missing components: kube-dns
	I1220 02:14:10.441962   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:10.442003   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:10.442017   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:10.442028   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:10.442035   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:10.442041   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:10.442048   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:10.442053   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:10.442059   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:10.442063   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:10.442080   37762 retry.go:31] will retry after 3.072193082s: missing components: kube-dns
	I1220 02:14:12.199503   37878 addons.go:530] duration metric: took 1.431726471s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1220 02:14:09.580950   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:09.581833   38979 main.go:144] libmachine: no network interface addresses found for domain false-503505 (source=lease)
	I1220 02:14:09.581857   38979 main.go:144] libmachine: trying to list again with source=arp
	I1220 02:14:09.582327   38979 main.go:144] libmachine: unable to find current IP address of domain false-503505 in network mk-false-503505 (interfaces detected: [])
	I1220 02:14:09.582367   38979 retry.go:31] will retry after 3.32332613s: waiting for domain to come up
	I1220 02:14:12.910036   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:12.911080   38979 main.go:144] libmachine: domain false-503505 has current primary IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:12.911099   38979 main.go:144] libmachine: found domain IP: 192.168.61.177
	I1220 02:14:12.911107   38979 main.go:144] libmachine: reserving static IP address...
	I1220 02:14:12.911656   38979 main.go:144] libmachine: unable to find host DHCP lease matching {name: "false-503505", mac: "52:54:00:4e:1e:41", ip: "192.168.61.177"} in network mk-false-503505
	I1220 02:14:13.162890   38979 main.go:144] libmachine: reserved static IP address 192.168.61.177 for domain false-503505
	I1220 02:14:13.162914   38979 main.go:144] libmachine: waiting for SSH...
	I1220 02:14:13.162921   38979 main.go:144] libmachine: Getting to WaitForSSH function...
	I1220 02:14:13.166240   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.166798   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.166839   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.167111   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.167442   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.167462   38979 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1220 02:14:13.287553   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1220 02:14:13.288033   38979 main.go:144] libmachine: domain creation complete
	I1220 02:14:13.289768   38979 machine.go:94] provisionDockerMachine start ...
	I1220 02:14:13.292967   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.293534   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.293566   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.293831   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.294091   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.294106   38979 main.go:144] libmachine: About to run SSH command:
	hostname
	I1220 02:14:13.408900   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1220 02:14:13.408931   38979 buildroot.go:166] provisioning hostname "false-503505"
	I1220 02:14:13.412183   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.412723   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.412747   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.412990   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.413194   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.413235   38979 main.go:144] libmachine: About to run SSH command:
	sudo hostname false-503505 && echo "false-503505" | sudo tee /etc/hostname
	I1220 02:14:13.545519   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: false-503505
	
	I1220 02:14:13.548500   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.548973   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.549006   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.549225   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.549497   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.549521   38979 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-503505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-503505/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-503505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1220 02:14:13.522551   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:13.522594   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:13.522608   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:13.522618   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:13.522624   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:13.522630   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:13.522633   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:13.522638   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:13.522643   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:13.522648   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:13.522671   37762 retry.go:31] will retry after 2.893940025s: missing components: kube-dns
	I1220 02:14:16.427761   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:16.427804   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:16.427822   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:16.427834   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:16.427841   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:16.427847   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:16.427857   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:16.427863   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:16.427876   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:16.427881   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:16.427898   37762 retry.go:31] will retry after 5.028189083s: missing components: kube-dns
	W1220 02:14:13.671217   37878 node_ready.go:57] node "custom-flannel-503505" has "Ready":"False" status (will retry)
	W1220 02:14:16.172759   37878 node_ready.go:57] node "custom-flannel-503505" has "Ready":"False" status (will retry)
	I1220 02:14:13.683279   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1220 02:14:13.683320   38979 buildroot.go:172] set auth options {CertDir:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube CaCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem ServerKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server-key.pem ClientKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube}
	I1220 02:14:13.683376   38979 buildroot.go:174] setting up certificates
	I1220 02:14:13.683393   38979 provision.go:84] configureAuth start
	I1220 02:14:13.687478   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.688091   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.688126   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.691975   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.692656   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.692715   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.692969   38979 provision.go:143] copyHostCerts
	I1220 02:14:13.693049   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem, removing ...
	I1220 02:14:13.693064   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem
	I1220 02:14:13.693154   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.pem (1082 bytes)
	I1220 02:14:13.693360   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem, removing ...
	I1220 02:14:13.693377   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem
	I1220 02:14:13.693441   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/cert.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cert.pem (1127 bytes)
	I1220 02:14:13.693548   38979 exec_runner.go:144] found /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem, removing ...
	I1220 02:14:13.693560   38979 exec_runner.go:203] rm: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem
	I1220 02:14:13.693612   38979 exec_runner.go:151] cp: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/key.pem --> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/key.pem (1675 bytes)
	I1220 02:14:13.693705   38979 provision.go:117] generating server cert: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem ca-key=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem private-key=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca-key.pem org=minitest.false-503505 san=[127.0.0.1 192.168.61.177 false-503505 localhost minikube]
	I1220 02:14:13.709086   38979 provision.go:177] copyRemoteCerts
	I1220 02:14:13.709144   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1220 02:14:13.713124   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.713703   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.713755   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.713967   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:13.809584   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1220 02:14:13.845246   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1220 02:14:13.881465   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1220 02:14:13.915284   38979 provision.go:87] duration metric: took 231.876161ms to configureAuth
	I1220 02:14:13.915334   38979 buildroot.go:189] setting minikube options for container-runtime
	I1220 02:14:13.915608   38979 config.go:182] Loaded profile config "false-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 02:14:13.919150   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.919807   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:13.919851   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:13.920156   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:13.920492   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:13.920559   38979 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1220 02:14:14.043505   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1220 02:14:14.043553   38979 buildroot.go:70] root file system type: tmpfs
	I1220 02:14:14.043717   38979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1220 02:14:14.047676   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.048130   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:14.048163   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.048457   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:14.048704   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:14.048784   38979 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1220 02:14:14.192756   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1220 02:14:14.196528   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.197071   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:14.197103   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:14.197379   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:14.197658   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:14.197687   38979 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1220 02:14:15.322369   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1220 02:14:15.322395   38979 machine.go:97] duration metric: took 2.032605943s to provisionDockerMachine
	I1220 02:14:15.322407   38979 client.go:176] duration metric: took 21.59897051s to LocalClient.Create
	I1220 02:14:15.322422   38979 start.go:167] duration metric: took 21.599041943s to libmachine.API.Create "false-503505"
	I1220 02:14:15.322430   38979 start.go:293] postStartSetup for "false-503505" (driver="kvm2")
	I1220 02:14:15.322443   38979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1220 02:14:15.322513   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1220 02:14:15.325726   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.326187   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.326227   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.326423   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.421695   38979 ssh_runner.go:195] Run: cat /etc/os-release
	I1220 02:14:15.426952   38979 info.go:137] Remote host: Buildroot 2025.02
	I1220 02:14:15.426987   38979 filesync.go:126] Scanning /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/addons for local assets ...
	I1220 02:14:15.427077   38979 filesync.go:126] Scanning /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files for local assets ...
	I1220 02:14:15.427228   38979 filesync.go:149] local asset: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/ssl/certs/130182.pem -> 130182.pem in /etc/ssl/certs
	I1220 02:14:15.427399   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1220 02:14:15.440683   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/ssl/certs/130182.pem --> /etc/ssl/certs/130182.pem (1708 bytes)
	I1220 02:14:15.472751   38979 start.go:296] duration metric: took 150.304753ms for postStartSetup
	I1220 02:14:15.476375   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.476839   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.476864   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.477147   38979 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/false-503505/config.json ...
	I1220 02:14:15.477371   38979 start.go:128] duration metric: took 21.756169074s to createHost
	I1220 02:14:15.480134   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.480583   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.480606   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.480814   38979 main.go:144] libmachine: Using SSH client type: native
	I1220 02:14:15.481047   38979 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1220 02:14:15.481060   38979 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1220 02:14:15.603682   38979 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766196855.575822881
	
	I1220 02:14:15.603714   38979 fix.go:216] guest clock: 1766196855.575822881
	I1220 02:14:15.603726   38979 fix.go:229] Guest: 2025-12-20 02:14:15.575822881 +0000 UTC Remote: 2025-12-20 02:14:15.477389482 +0000 UTC m=+21.885083527 (delta=98.433399ms)
	I1220 02:14:15.603749   38979 fix.go:200] guest clock delta is within tolerance: 98.433399ms
	I1220 02:14:15.603770   38979 start.go:83] releasing machines lock for "false-503505", held for 21.882663608s
	I1220 02:14:15.607369   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.607986   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.608024   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.608687   38979 ssh_runner.go:195] Run: cat /version.json
	I1220 02:14:15.608792   38979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1220 02:14:15.612782   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613294   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.613342   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613436   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.613556   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.614074   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:15.614107   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:15.614392   38979 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/false-503505/id_rsa Username:docker}
	I1220 02:14:15.700660   38979 ssh_runner.go:195] Run: systemctl --version
	I1220 02:14:15.725011   38979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1220 02:14:15.731935   38979 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1220 02:14:15.732099   38979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1220 02:14:15.744444   38979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1220 02:14:15.768292   38979 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1220 02:14:15.768338   38979 start.go:496] detecting cgroup driver to use...
	I1220 02:14:15.768490   38979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1220 02:14:15.808234   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1220 02:14:15.830328   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1220 02:14:15.848439   38979 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1220 02:14:15.848537   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1220 02:14:15.865682   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1220 02:14:15.887500   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1220 02:14:15.906005   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1220 02:14:15.925461   38979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1220 02:14:15.940692   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1220 02:14:15.959326   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1220 02:14:15.978291   38979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
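
The run above rewrites /etc/containerd/config.toml in place with sed: it pins the sandbox (pause) image, forces SystemdCgroup = false for the cgroupfs driver, normalizes the runc runtime type to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A minimal Go sketch of the same idea, assuming direct local access to the file rather than minikube's ssh_runner:

// containerd_config.go: illustrative only -- mirrors the sed-based edits in the
// log above, assuming the config file is readable/writable locally (not via SSH).
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	cfg := string(data)

	// Pin the sandbox image used for pod sandboxes.
	cfg = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	// Use the cgroupfs driver (SystemdCgroup = false), matching the log.
	cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(cfg, `${1}SystemdCgroup = false`)
	// Normalize the runtime type and the CNI conf_dir.
	cfg = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`).
		ReplaceAllString(cfg, `"io.containerd.runc.v2"`)
	cfg = regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`).
		ReplaceAllString(cfg, `${1}conf_dir = "/etc/cni/net.d"`)

	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		log.Fatal(err)
	}
}
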
	I1220 02:14:15.997878   38979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1220 02:14:16.014027   38979 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1220 02:14:16.014121   38979 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1220 02:14:16.033465   38979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
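
The sysctl probe fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is switched on. A hedged Go sketch of those two steps, assuming it runs as root on the node itself:

// netfilter_prep.go: illustrative sketch of the br_netfilter / ip_forward steps
// shown in the log. Must run as root on the node.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Load br_netfilter so the bridge-nf-call-iptables sysctl exists.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// Confirm the bridge sysctl is now visible.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		log.Printf("bridge-nf-call-iptables still missing: %v", err)
	}
}
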
	I1220 02:14:16.050354   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:16.231792   38979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1220 02:14:16.289416   38979 start.go:496] detecting cgroup driver to use...
	I1220 02:14:16.289528   38979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1220 02:14:16.314852   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1220 02:14:16.343915   38979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1220 02:14:16.373499   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1220 02:14:16.393749   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1220 02:14:16.415218   38979 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1220 02:14:16.448678   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1220 02:14:16.471638   38979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1220 02:14:16.499850   38979 ssh_runner.go:195] Run: which cri-dockerd
	I1220 02:14:16.505358   38979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1220 02:14:16.518773   38979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1220 02:14:16.542267   38979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1220 02:14:16.744157   38979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1220 02:14:16.924495   38979 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1220 02:14:16.924658   38979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
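
The exact 130-byte payload written to /etc/docker/daemon.json is not shown in the log; the documented way to select the cgroupfs driver in daemon.json is the "exec-opts" setting, so an assumed minimal equivalent, generated in Go, might look like this (illustrative only, not the real payload):

// daemon_json.go: prints an assumed, minimal daemon.json selecting the
// cgroupfs driver via Docker's documented "exec-opts" option.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// In the log this content is scp'd to /etc/docker/daemon.json on the VM;
	// here we only print it.
	fmt.Println(string(out))
}
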
	I1220 02:14:16.953858   38979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1220 02:14:16.973889   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:17.180489   38979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1220 02:14:17.720891   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1220 02:14:17.740432   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1220 02:14:17.756728   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1220 02:14:17.780803   38979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1220 02:14:17.958835   38979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1220 02:14:18.121422   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:18.283915   38979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1220 02:14:18.319068   38979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1220 02:14:18.334630   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:18.486080   38979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1220 02:14:18.616715   38979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1220 02:14:18.643324   38979 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1220 02:14:18.643397   38979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1220 02:14:18.649921   38979 start.go:564] Will wait 60s for crictl version
	I1220 02:14:18.649987   38979 ssh_runner.go:195] Run: which crictl
	I1220 02:14:18.655062   38979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1220 02:14:18.692451   38979 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1220 02:14:18.692517   38979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1220 02:14:18.725655   38979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
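
The runtime is then confirmed with Docker's server-side version template. A small Go sketch of the same probe, assuming the docker CLI is on PATH locally:

// docker_version.go: runs the version probe seen in the log, locally.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatalf("docker version: %v", err)
	}
	fmt.Println("server version:", strings.TrimSpace(string(out)))
}
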
	I1220 02:14:21.464469   37762 system_pods.go:86] 9 kube-system pods found
	I1220 02:14:21.464520   37762 system_pods.go:89] "calico-kube-controllers-5c676f698c-5plhl" [8be42fe3-d58c-4bbf-9c51-8b689e28f671] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1220 02:14:21.464535   37762 system_pods.go:89] "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1220 02:14:21.464548   37762 system_pods.go:89] "coredns-66bc5c9577-hd2kg" [6f327eef-5be3-4358-8bf0-be3e0e9a13f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:21.464554   37762 system_pods.go:89] "etcd-calico-503505" [7fed5e30-e972-43f8-9ef7-7261b7216d7c] Running
	I1220 02:14:21.464561   37762 system_pods.go:89] "kube-apiserver-calico-503505" [b81cb7dd-efa9-428d-b155-4c8d4fcb5566] Running
	I1220 02:14:21.464567   37762 system_pods.go:89] "kube-controller-manager-calico-503505" [4a7a0d88-4103-46e7-8090-72b0e9d91c39] Running
	I1220 02:14:21.464575   37762 system_pods.go:89] "kube-proxy-gzr82" [6a0b0ea5-98c0-4762-9423-1c10dee4576e] Running
	I1220 02:14:21.464580   37762 system_pods.go:89] "kube-scheduler-calico-503505" [ecad390b-bd72-4f11-82fd-6544060a23c7] Running
	I1220 02:14:21.464585   37762 system_pods.go:89] "storage-provisioner" [742d00f3-5d72-488a-afa1-1fcd40398cf6] Running
	I1220 02:14:21.464605   37762 retry.go:31] will retry after 4.407665546s: missing components: kube-dns
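
Both startup paths poll the kube-system pod list and back off when kube-dns is not yet running, which is what the "will retry after ..." lines record. A minimal sketch of that retry shape in plain Go; checkKubeDNS is a stand-in, not minikube's retry package:

// retry_sketch.go: generic poll-with-backoff in the spirit of the
// "will retry after ..." lines above; checkKubeDNS is a placeholder.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkKubeDNS stands in for "list kube-system pods and confirm kube-dns is Running".
func checkKubeDNS() error {
	return errors.New("missing components: kube-dns")
}

func main() {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		err := checkKubeDNS()
		if err == nil {
			fmt.Println("k8s-apps are running")
			return
		}
		// Jittered, growing delay, similar to the intervals seen in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2
	}
	fmt.Println("gave up waiting for kube-dns")
}
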
	W1220 02:14:18.670754   37878 node_ready.go:57] node "custom-flannel-503505" has "Ready":"False" status (will retry)
	I1220 02:14:20.670484   37878 node_ready.go:49] node "custom-flannel-503505" is "Ready"
	I1220 02:14:20.670541   37878 node_ready.go:38] duration metric: took 9.00398985s for node "custom-flannel-503505" to be "Ready" ...
	I1220 02:14:20.670564   37878 api_server.go:52] waiting for apiserver process to appear ...
	I1220 02:14:20.670694   37878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 02:14:20.707244   37878 api_server.go:72] duration metric: took 9.939328273s to wait for apiserver process to appear ...
	I1220 02:14:20.707279   37878 api_server.go:88] waiting for apiserver healthz status ...
	I1220 02:14:20.707301   37878 api_server.go:253] Checking apiserver healthz at https://192.168.72.110:8443/healthz ...
	I1220 02:14:20.717863   37878 api_server.go:279] https://192.168.72.110:8443/healthz returned 200:
	ok
	I1220 02:14:20.719853   37878 api_server.go:141] control plane version: v1.34.3
	I1220 02:14:20.719883   37878 api_server.go:131] duration metric: took 12.596477ms to wait for apiserver health ...
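
The apiserver is considered healthy once GET /healthz on the node's API endpoint returns 200 with body "ok". A hedged Go sketch of that probe; the real client authenticates with the profile's client certificates, while this sketch simply skips TLS verification to stay short:

// healthz_probe.go: simplistic /healthz probe. The real check uses the
// profile's client certs; InsecureSkipVerify is only to keep the sketch short.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.110:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
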
	I1220 02:14:20.719893   37878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1220 02:14:20.726999   37878 system_pods.go:59] 7 kube-system pods found
	I1220 02:14:20.727077   37878 system_pods.go:61] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:20.727090   37878 system_pods.go:61] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:20.727099   37878 system_pods.go:61] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:20.727106   37878 system_pods.go:61] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:20.727112   37878 system_pods.go:61] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:20.727121   37878 system_pods.go:61] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:20.727128   37878 system_pods.go:61] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:20.727138   37878 system_pods.go:74] duration metric: took 7.237909ms to wait for pod list to return data ...
	I1220 02:14:20.727152   37878 default_sa.go:34] waiting for default service account to be created ...
	I1220 02:14:20.738383   37878 default_sa.go:45] found service account: "default"
	I1220 02:14:20.738419   37878 default_sa.go:55] duration metric: took 11.258578ms for default service account to be created ...
	I1220 02:14:20.738431   37878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1220 02:14:20.755625   37878 system_pods.go:86] 7 kube-system pods found
	I1220 02:14:20.755705   37878 system_pods.go:89] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:20.755715   37878 system_pods.go:89] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:20.755743   37878 system_pods.go:89] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:20.755825   37878 system_pods.go:89] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:20.755884   37878 system_pods.go:89] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:20.755901   37878 system_pods.go:89] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:20.755939   37878 system_pods.go:89] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:20.755969   37878 retry.go:31] will retry after 243.64974ms: missing components: kube-dns
	I1220 02:14:21.008936   37878 system_pods.go:86] 7 kube-system pods found
	I1220 02:14:21.008998   37878 system_pods.go:89] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:21.009009   37878 system_pods.go:89] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:21.009018   37878 system_pods.go:89] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:21.009025   37878 system_pods.go:89] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:21.009033   37878 system_pods.go:89] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:21.009041   37878 system_pods.go:89] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:21.009076   37878 system_pods.go:89] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:21.009102   37878 retry.go:31] will retry after 302.021984ms: missing components: kube-dns
	I1220 02:14:21.324004   37878 system_pods.go:86] 7 kube-system pods found
	I1220 02:14:21.324091   37878 system_pods.go:89] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:21.324103   37878 system_pods.go:89] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:21.324111   37878 system_pods.go:89] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:21.324118   37878 system_pods.go:89] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:21.324136   37878 system_pods.go:89] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:21.324144   37878 system_pods.go:89] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:21.324156   37878 system_pods.go:89] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:21.324175   37878 retry.go:31] will retry after 335.232555ms: missing components: kube-dns
	I1220 02:14:21.666742   37878 system_pods.go:86] 7 kube-system pods found
	I1220 02:14:21.666783   37878 system_pods.go:89] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1220 02:14:21.666790   37878 system_pods.go:89] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:21.666796   37878 system_pods.go:89] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:21.666800   37878 system_pods.go:89] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:21.666804   37878 system_pods.go:89] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:21.666807   37878 system_pods.go:89] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:21.666811   37878 system_pods.go:89] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1220 02:14:21.666827   37878 retry.go:31] will retry after 489.261855ms: missing components: kube-dns
	I1220 02:14:22.160484   37878 system_pods.go:86] 7 kube-system pods found
	I1220 02:14:22.160513   37878 system_pods.go:89] "coredns-66bc5c9577-sqfn8" [5049fdb9-e7ad-4399-81f2-09401dc596ee] Running
	I1220 02:14:22.160519   37878 system_pods.go:89] "etcd-custom-flannel-503505" [f4d689e5-b742-43fc-9410-5bd64799d7ca] Running
	I1220 02:14:22.160523   37878 system_pods.go:89] "kube-apiserver-custom-flannel-503505" [9edfcf3a-ac6f-45c4-85e3-989d63d60395] Running
	I1220 02:14:22.160527   37878 system_pods.go:89] "kube-controller-manager-custom-flannel-503505" [2afe8125-f900-4f10-ac33-2aa361fb7c20] Running
	I1220 02:14:22.160530   37878 system_pods.go:89] "kube-proxy-9kg7f" [5ca03971-c23b-486e-9469-cbff81fb30de] Running
	I1220 02:14:22.160533   37878 system_pods.go:89] "kube-scheduler-custom-flannel-503505" [c4dcfa40-9627-4319-9f90-443f6964a9ec] Running
	I1220 02:14:22.160536   37878 system_pods.go:89] "storage-provisioner" [412afff2-e1a9-4433-8599-0976c8111dbe] Running
	I1220 02:14:22.160543   37878 system_pods.go:126] duration metric: took 1.422105753s to wait for k8s-apps to be running ...
	I1220 02:14:22.160550   37878 system_svc.go:44] waiting for kubelet service to be running ....
	I1220 02:14:22.160597   37878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 02:14:22.178117   37878 system_svc.go:56] duration metric: took 17.557366ms WaitForService to wait for kubelet
	I1220 02:14:22.178151   37878 kubeadm.go:587] duration metric: took 11.410240532s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1220 02:14:22.178236   37878 node_conditions.go:102] verifying NodePressure condition ...
	I1220 02:14:22.182122   37878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1220 02:14:22.182155   37878 node_conditions.go:123] node cpu capacity is 2
	I1220 02:14:22.182173   37878 node_conditions.go:105] duration metric: took 3.930704ms to run NodePressure ...
	I1220 02:14:22.182187   37878 start.go:242] waiting for startup goroutines ...
	I1220 02:14:22.182208   37878 start.go:247] waiting for cluster config update ...
	I1220 02:14:22.182225   37878 start.go:256] writing updated cluster config ...
	I1220 02:14:22.184970   37878 ssh_runner.go:195] Run: rm -f paused
	I1220 02:14:22.191086   37878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1220 02:14:22.195279   37878 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqfn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.199785   37878 pod_ready.go:94] pod "coredns-66bc5c9577-sqfn8" is "Ready"
	I1220 02:14:22.199808   37878 pod_ready.go:86] duration metric: took 4.507285ms for pod "coredns-66bc5c9577-sqfn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.202062   37878 pod_ready.go:83] waiting for pod "etcd-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.207096   37878 pod_ready.go:94] pod "etcd-custom-flannel-503505" is "Ready"
	I1220 02:14:22.207127   37878 pod_ready.go:86] duration metric: took 5.03686ms for pod "etcd-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.209865   37878 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.214503   37878 pod_ready.go:94] pod "kube-apiserver-custom-flannel-503505" is "Ready"
	I1220 02:14:22.214540   37878 pod_ready.go:86] duration metric: took 4.645938ms for pod "kube-apiserver-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.217066   37878 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.755631   37878 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-503505" is "Ready"
	I1220 02:14:22.755662   37878 pod_ready.go:86] duration metric: took 538.56085ms for pod "kube-controller-manager-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:22.930705   37878 pod_ready.go:83] waiting for pod "kube-proxy-9kg7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:18.758400   38979 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 28.5.2 ...
	I1220 02:14:18.761850   38979 main.go:144] libmachine: domain false-503505 has defined MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:18.762426   38979 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:1e:41", ip: ""} in network mk-false-503505: {Iface:virbr3 ExpiryTime:2025-12-20 03:14:09 +0000 UTC Type:0 Mac:52:54:00:4e:1e:41 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:false-503505 Clientid:01:52:54:00:4e:1e:41}
	I1220 02:14:18.762460   38979 main.go:144] libmachine: domain false-503505 has defined IP address 192.168.61.177 and MAC address 52:54:00:4e:1e:41 in network mk-false-503505
	I1220 02:14:18.762677   38979 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1220 02:14:18.767521   38979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
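
The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping. The same idea in Go, assuming direct root access to the file rather than the sudo-over-SSH path in the log:

// hosts_entry.go: drop any existing host.minikube.internal line and append the
// gateway mapping, mirroring the /etc/hosts one-liner above. Needs root.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	const entry = "192.168.61.1\thost.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines, then append the fresh entry.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}
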
	I1220 02:14:18.787918   38979 kubeadm.go:884] updating cluster {Name:false-503505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:false-503505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1220 02:14:18.788080   38979 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1220 02:14:18.788143   38979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1220 02:14:18.811874   38979 docker.go:691] Got preloaded images: 
	I1220 02:14:18.811902   38979 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1220 02:14:18.811987   38979 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1220 02:14:18.825027   38979 ssh_runner.go:195] Run: which lz4
	I1220 02:14:18.831228   38979 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1220 02:14:18.836810   38979 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1220 02:14:18.836859   38979 ssh_runner.go:362] scp /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284304868 bytes)
	I1220 02:14:20.026096   38979 docker.go:655] duration metric: took 1.194964371s to copy over tarball
	I1220 02:14:20.026187   38979 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1220 02:14:21.663754   38979 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.637534144s)
	I1220 02:14:21.663796   38979 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1220 02:14:21.714741   38979 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1220 02:14:21.728590   38979 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1220 02:14:21.753302   38979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1220 02:14:21.775766   38979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1220 02:14:21.965281   38979 ssh_runner.go:195] Run: sudo systemctl restart docker
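
Because the v1.34.3 images are not yet in the Docker image store, the preloaded tarball is copied onto the node and unpacked into /var with lz4 before Docker is restarted. A sketch of the extraction step in Go, shelling out to the same tar invocation and assuming tar, lz4, and /preloaded.tar.lz4 are already present on the node:

// preload_extract.go: runs the tar extraction from the log on the node itself.
// Assumes GNU tar with lz4 support and /preloaded.tar.lz4 already copied over.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload: %v", err)
	}
	// The log removes the tarball afterwards to free space.
	if out, err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
		log.Printf("cleanup: %v: %s", err, out)
	}
}
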
	I1220 02:14:23.196123   37878 pod_ready.go:94] pod "kube-proxy-9kg7f" is "Ready"
	I1220 02:14:23.196152   37878 pod_ready.go:86] duration metric: took 265.420415ms for pod "kube-proxy-9kg7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:23.395371   37878 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:23.797357   37878 pod_ready.go:94] pod "kube-scheduler-custom-flannel-503505" is "Ready"
	I1220 02:14:23.797395   37878 pod_ready.go:86] duration metric: took 401.98809ms for pod "kube-scheduler-custom-flannel-503505" in "kube-system" namespace to be "Ready" or be gone ...
	I1220 02:14:23.797413   37878 pod_ready.go:40] duration metric: took 1.606275744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1220 02:14:23.864427   37878 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1220 02:14:23.865897   37878 out.go:179] * Done! kubectl is now configured to use "custom-flannel-503505" cluster and "default" namespace by default
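
Each "waiting for pod ... to be Ready" line above comes down to reading the pod's Ready condition from the API. A hedged client-go sketch of that check; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is not minikube's pod_ready implementation:

// pod_ready_sketch.go: checks a pod's Ready condition with client-go.
// The kubeconfig path is a placeholder for illustration.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").
		Get(context.Background(), "coredns-66bc5c9577-sqfn8", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, ready)
}
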
	
	
	==> Docker <==
	Dec 20 02:13:25 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24384c9b6386768f183a17a14b0915b4c06115ceca79b379c9a8caeb87ac9be2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 20 02:13:26 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:26.077359482Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 20 02:13:33 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:33Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.166995649Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.247637303Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.247742747Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 20 02:13:33 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:13:33Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.870943978Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.870972001Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.874954248Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 20 02:13:33 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:33.875104860Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:13:46 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:46.013938388Z" level=error msg="Handler for POST /v1.51/containers/e389ed009c41/pause returned error: cannot pause container e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e: OCI runtime pause failed: container not running"
	Dec 20 02:13:46 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:13:46.096234565Z" level=info msg="ignoring event" container=e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 20 02:14:19 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:19Z" level=error msg="error getting RW layer size for container ID 'f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833': Error response from daemon: No such container: f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833"
	Dec 20 02:14:19 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833'"
	Dec 20 02:14:20 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:20Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-j9fnc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c17f03aae9a804c2000dd7a7f2df0a5c0e11cb7cc45d2898ceeb917e335ab8a6\""
	Dec 20 02:14:20 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.054814750Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.173663258Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.173805989Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 20 02:14:21 default-k8s-diff-port-032958 cri-dockerd[1567]: time="2025-12-20T02:14:21Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.210054061Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.210106510Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.216155700Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 20 02:14:21 default-k8s-diff-port-032958 dockerd[1117]: time="2025-12-20T02:14:21.216230216Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	db82439a82773       6e38f40d628db                                                                                         5 seconds ago        Running             storage-provisioner       2                   b98cac4df9b58       storage-provisioner                                    kube-system
	3d0dc5e4eaf53       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        52 seconds ago       Running             kubernetes-dashboard      0                   c7214caee965e       kubernetes-dashboard-855c9754f9-v5f62                  kubernetes-dashboard
	bd3af300e51d6       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   620275c9345e0       busybox                                                default
	c9a7560c3855f       52546a367cc9e                                                                                         About a minute ago   Running             coredns                   1                   bd05cab39e53f       coredns-66bc5c9577-gjmjk                               kube-system
	e389ed009c414       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b98cac4df9b58       storage-provisioner                                    kube-system
	8a1598184096c       36eef8e07bdd6                                                                                         About a minute ago   Running             kube-proxy                1                   fceaaba1c1db3       kube-proxy-22tlj                                       kube-system
	2808d78b661f8       aec12dadf56dd                                                                                         About a minute ago   Running             kube-scheduler            1                   6d3fddf7afe4b       kube-scheduler-default-k8s-diff-port-032958            kube-system
	5d487135b34c5       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   57ad4b77ed607       etcd-default-k8s-diff-port-032958                      kube-system
	0be7d44211125       5826b25d990d7                                                                                         About a minute ago   Running             kube-controller-manager   1                   f7e02a8a528fa       kube-controller-manager-default-k8s-diff-port-032958   kube-system
	799ae6e77e4dc       aa27095f56193                                                                                         About a minute ago   Running             kube-apiserver            1                   c0277aff9f306       kube-apiserver-default-k8s-diff-port-032958            kube-system
	9a4671ba050b2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   9bfc558dcff48       busybox                                                default
	aef0cd5a3775d       52546a367cc9e                                                                                         2 minutes ago        Exited              coredns                   0                   4e8574a6b885b       coredns-66bc5c9577-gjmjk                               kube-system
	696c72bae65f2       36eef8e07bdd6                                                                                         2 minutes ago        Exited              kube-proxy                0                   959487a2071a7       kube-proxy-22tlj                                       kube-system
	37cee352777b9       aa27095f56193                                                                                         3 minutes ago        Exited              kube-apiserver            0                   042ea7540f943       kube-apiserver-default-k8s-diff-port-032958            kube-system
	6955eb7dbb7a8       a3e246e9556e9                                                                                         3 minutes ago        Exited              etcd                      0                   1ae4fd44c2900       etcd-default-k8s-diff-port-032958                      kube-system
	bc3e91d6c19d6       5826b25d990d7                                                                                         3 minutes ago        Exited              kube-controller-manager   0                   ec2c7b618f7f7       kube-controller-manager-default-k8s-diff-port-032958   kube-system
	44fb178dfab72       aec12dadf56dd                                                                                         3 minutes ago        Exited              kube-scheduler            0                   010ff1a843791       kube-scheduler-default-k8s-diff-port-032958            kube-system
	
	
	==> coredns [aef0cd5a3775] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c9a7560c3855] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39548 - 58159 "HINFO IN 6794078486954714189.4770737732440681574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045655293s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-032958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-032958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7cd9f41b7421760cf1f1eaa8725bdb975037b06d
	                    minikube.k8s.io/name=default-k8s-diff-port-032958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_20T02_11_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Dec 2025 02:11:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-032958
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Dec 2025 02:14:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Dec 2025 02:14:20 +0000   Sat, 20 Dec 2025 02:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.139
	  Hostname:    default-k8s-diff-port-032958
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a22ece73f0a74620b511d2c9063270d7
	  System UUID:                a22ece73-f0a7-4620-b511-d2c9063270d7
	  Boot ID:                    3a1ecf6e-4165-4ac3-94cb-43972902c57c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 coredns-66bc5c9577-gjmjk                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m57s
	  kube-system                 etcd-default-k8s-diff-port-032958                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m1s
	  kube-system                 kube-apiserver-default-k8s-diff-port-032958             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-032958    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 kube-proxy-22tlj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-032958             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 metrics-server-746fcd58dc-r9hzl                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         2m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wzcc7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v5f62                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m54s                kube-proxy       
	  Normal   Starting                 69s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m9s (x8 over 3m9s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m9s (x8 over 3m9s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m9s (x7 over 3m9s)  kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m9s                 kubelet          Starting kubelet.
	  Normal   Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m1s                 kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m1s                 kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m1s                 kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m58s                node-controller  Node default-k8s-diff-port-032958 event: Registered Node default-k8s-diff-port-032958 in Controller
	  Normal   NodeReady                2m57s                kubelet          Node default-k8s-diff-port-032958 status is now: NodeReady
	  Normal   Starting                 76s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  76s (x8 over 76s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x8 over 76s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x7 over 76s)    kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  76s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 71s                  kubelet          Node default-k8s-diff-port-032958 has been rebooted, boot id: 3a1ecf6e-4165-4ac3-94cb-43972902c57c
	  Normal   RegisteredNode           67s                  node-controller  Node default-k8s-diff-port-032958 event: Registered Node default-k8s-diff-port-032958 in Controller
	  Normal   Starting                 6s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s                   kubelet          Node default-k8s-diff-port-032958 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Dec20 02:12] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000038] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003240] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.994669] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.151734] kauditd_printk_skb: 1 callbacks suppressed
	[Dec20 02:13] kauditd_printk_skb: 393 callbacks suppressed
	[  +0.106540] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.723521] kauditd_printk_skb: 165 callbacks suppressed
	[  +3.591601] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.607561] kauditd_printk_skb: 259 callbacks suppressed
	[  +0.307946] kauditd_printk_skb: 17 callbacks suppressed
	[Dec20 02:14] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.244764] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [5d487135b34c] <==
	{"level":"warn","ts":"2025-12-20T02:13:13.342109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.352027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.369107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.385882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.394142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.401109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.409984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.418468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.428572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.434912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.445522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.454332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.465435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.483111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.490648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.499438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:13.572659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:13:29.160598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"521.564224ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13052446816451747392 > lease_revoke:<id:35239b39868e0a7a>","response":"size:28"}
	{"level":"info","ts":"2025-12-20T02:13:29.161474Z","caller":"traceutil/trace.go:172","msg":"trace[1265080339] linearizableReadLoop","detail":"{readStateIndex:772; appliedIndex:771; }","duration":"411.486582ms","start":"2025-12-20T02:13:28.749972Z","end":"2025-12-20T02:13:29.161458Z","steps":["trace[1265080339] 'read index received'  (duration: 33.594µs)","trace[1265080339] 'applied index is now lower than readState.Index'  (duration: 411.451844ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-20T02:13:29.161591Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"411.631732ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-20T02:13:29.161613Z","caller":"traceutil/trace.go:172","msg":"trace[1600534378] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:727; }","duration":"411.662139ms","start":"2025-12-20T02:13:28.749943Z","end":"2025-12-20T02:13:29.161605Z","steps":["trace[1600534378] 'agreement among raft nodes before linearized reading'  (duration: 411.61436ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:13:29.162719Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.7284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958\" limit:1 ","response":"range_response_count:1 size:5168"}
	{"level":"info","ts":"2025-12-20T02:13:29.163046Z","caller":"traceutil/trace.go:172","msg":"trace[1971843700] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958; range_end:; response_count:1; response_revision:727; }","duration":"312.095123ms","start":"2025-12-20T02:13:28.850939Z","end":"2025-12-20T02:13:29.163034Z","steps":["trace[1971843700] 'agreement among raft nodes before linearized reading'  (duration: 311.117462ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:13:29.163083Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-20T02:13:28.850904Z","time spent":"312.166241ms","remote":"127.0.0.1:50178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":5191,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-032958\" limit:1 "}
	{"level":"info","ts":"2025-12-20T02:13:30.222290Z","caller":"traceutil/trace.go:172","msg":"trace[252289306] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"269.402974ms","start":"2025-12-20T02:13:29.952867Z","end":"2025-12-20T02:13:30.222270Z","steps":["trace[252289306] 'process raft request'  (duration: 269.235053ms)"],"step_count":1}
	
	
	==> etcd [6955eb7dbb7a] <==
	{"level":"warn","ts":"2025-12-20T02:11:19.847959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-20T02:11:19.949918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60078","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-20T02:12:07.244895Z","caller":"traceutil/trace.go:172","msg":"trace[798266364] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:511; }","duration":"213.060794ms","start":"2025-12-20T02:12:07.031794Z","end":"2025-12-20T02:12:07.244854Z","steps":["trace[798266364] 'read index received'  (duration: 213.055389ms)","trace[798266364] 'applied index is now lower than readState.Index'  (duration: 4.109µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-20T02:12:07.245037Z","caller":"traceutil/trace.go:172","msg":"trace[286472680] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"297.72601ms","start":"2025-12-20T02:12:06.947300Z","end":"2025-12-20T02:12:07.245026Z","steps":["trace[286472680] 'process raft request'  (duration: 297.578574ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:12:07.245042Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.193567ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-20T02:12:07.245100Z","caller":"traceutil/trace.go:172","msg":"trace[1257312239] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:493; }","duration":"213.303974ms","start":"2025-12-20T02:12:07.031787Z","end":"2025-12-20T02:12:07.245091Z","steps":["trace[1257312239] 'agreement among raft nodes before linearized reading'  (duration: 213.173447ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-20T02:12:08.425636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.814071ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13052446816422726242 > lease_revoke:<id:35239b39868e09cb>","response":"size:28"}
	{"level":"info","ts":"2025-12-20T02:12:15.681646Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-20T02:12:15.681764Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-032958","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.139:2380"],"advertise-client-urls":["https://192.168.83.139:2379"]}
	{"level":"error","ts":"2025-12-20T02:12:15.681878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-20T02:12:22.684233Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-20T02:12:22.686809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.686860Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"911810311894b523","current-leader-member-id":"911810311894b523"}
	{"level":"info","ts":"2025-12-20T02:12:22.687961Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-20T02:12:22.688006Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-20T02:12:22.691490Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-20T02:12:22.691626Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-20T02:12:22.691850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-20T02:12:22.692143Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.139:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-20T02:12:22.692250Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.139:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-20T02:12:22.692290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.139:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.695968Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.139:2380"}
	{"level":"error","ts":"2025-12-20T02:12:22.696039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.139:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-20T02:12:22.696144Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.139:2380"}
	{"level":"info","ts":"2025-12-20T02:12:22.696154Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-032958","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.139:2380"],"advertise-client-urls":["https://192.168.83.139:2379"]}
	
	
	==> kernel <==
	 02:14:25 up 1 min,  0 users,  load average: 1.25, 0.47, 0.17
	Linux default-k8s-diff-port-032958 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [37cee352777b] <==
	W1220 02:12:24.853572       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.853799       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.886374       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:24.979017       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.000084       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.002681       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.055296       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.076421       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.117381       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.121093       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.174493       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.192366       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.290406       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.339079       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.360110       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.370079       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.387509       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.392242       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.418164       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.463768       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.491271       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.551484       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.672624       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.708145       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1220 02:12:25.738375       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [799ae6e77e4d] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1220 02:13:15.375295       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1220 02:13:15.375659       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1220 02:13:15.376472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1220 02:13:16.429720       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1220 02:13:17.061950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1220 02:13:17.106313       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1220 02:13:17.143082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1220 02:13:17.149304       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1220 02:13:18.957068       1 controller.go:667] quota admission added evaluator for: endpoints
	I1220 02:13:18.991435       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1220 02:13:19.164812       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1220 02:13:19.310960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1220 02:13:19.735545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.251.44"}
	I1220 02:13:19.755566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.131.142"}
	W1220 02:14:18.888986       1 handler_proxy.go:99] no RequestInfo found in the context
	E1220 02:14:18.889066       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1220 02:14:18.889082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1220 02:14:18.898096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1220 02:14:18.898161       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1220 02:14:18.898178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0be7d4421112] <==
	I1220 02:13:18.949472       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1220 02:13:18.949789       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1220 02:13:18.911345       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1220 02:13:18.951606       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1220 02:13:18.954539       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1220 02:13:18.928851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:13:18.929424       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1220 02:13:18.967844       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:13:18.972782       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1220 02:13:18.972915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:13:18.972963       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1220 02:13:18.972974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1220 02:13:18.978649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:13:19.003577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1220 02:13:19.507058       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.557712       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.579564       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.584826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.605680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.607173       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.612896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1220 02:13:19.619443       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1220 02:13:28.942844       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1220 02:14:18.972742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1220 02:14:19.019968       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [bc3e91d6c19d] <==
	I1220 02:11:27.795191       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1220 02:11:27.795197       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1220 02:11:27.795206       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1220 02:11:27.795417       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1220 02:11:27.804009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1220 02:11:27.809262       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-032958" podCIDRs=["10.244.0.0/24"]
	I1220 02:11:27.814727       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1220 02:11:27.816005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:11:27.818197       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1220 02:11:27.827800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:11:27.835424       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1220 02:11:27.835598       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1220 02:11:27.835777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-032958"
	I1220 02:11:27.835796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1220 02:11:27.835833       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1220 02:11:27.835932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1220 02:11:27.835939       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1220 02:11:27.835945       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1220 02:11:27.838763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1220 02:11:27.838905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1220 02:11:27.838920       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1220 02:11:27.843443       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1220 02:11:27.845334       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1220 02:11:27.853111       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1220 02:11:32.836931       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [696c72bae65f] <==
	I1220 02:11:30.533772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1220 02:11:30.634441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1220 02:11:30.634675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.139"]
	E1220 02:11:30.635231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1220 02:11:30.763679       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1220 02:11:30.764391       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1220 02:11:30.764588       1 server_linux.go:132] "Using iptables Proxier"
	I1220 02:11:30.801765       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1220 02:11:30.802104       1 server.go:527] "Version info" version="v1.34.3"
	I1220 02:11:30.802116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:11:30.821963       1 config.go:309] "Starting node config controller"
	I1220 02:11:30.822050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1220 02:11:30.822061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1220 02:11:30.826798       1 config.go:200] "Starting service config controller"
	I1220 02:11:30.826954       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1220 02:11:30.829847       1 config.go:106] "Starting endpoint slice config controller"
	I1220 02:11:30.830754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1220 02:11:30.831058       1 config.go:403] "Starting serviceCIDR config controller"
	I1220 02:11:30.831070       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1220 02:11:30.937234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1220 02:11:30.937307       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1220 02:11:30.933586       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8a1598184096] <==
	I1220 02:13:16.000300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1220 02:13:16.100699       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1220 02:13:16.100734       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.139"]
	E1220 02:13:16.100793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1220 02:13:16.145133       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1220 02:13:16.145459       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1220 02:13:16.145683       1 server_linux.go:132] "Using iptables Proxier"
	I1220 02:13:16.156575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1220 02:13:16.157810       1 server.go:527] "Version info" version="v1.34.3"
	I1220 02:13:16.158021       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:13:16.162964       1 config.go:200] "Starting service config controller"
	I1220 02:13:16.162999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1220 02:13:16.163014       1 config.go:106] "Starting endpoint slice config controller"
	I1220 02:13:16.163018       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1220 02:13:16.163027       1 config.go:403] "Starting serviceCIDR config controller"
	I1220 02:13:16.163030       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1220 02:13:16.166161       1 config.go:309] "Starting node config controller"
	I1220 02:13:16.166330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1220 02:13:16.166459       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1220 02:13:16.263219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1220 02:13:16.263310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1220 02:13:16.263327       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2808d78b661f] <==
	I1220 02:13:12.086051       1 serving.go:386] Generated self-signed cert in-memory
	I1220 02:13:14.442280       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1220 02:13:14.442329       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1220 02:13:14.455168       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1220 02:13:14.455457       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1220 02:13:14.455686       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:13:14.455746       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:13:14.455761       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.455884       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.456412       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1220 02:13:14.456890       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1220 02:13:14.556446       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1220 02:13:14.556930       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1220 02:13:14.557255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [44fb178dfab7] <==
	E1220 02:11:20.961299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1220 02:11:20.964924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1220 02:11:20.965293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1220 02:11:20.965512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1220 02:11:21.832650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1220 02:11:21.832947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1220 02:11:21.847314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1220 02:11:21.848248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1220 02:11:21.883915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1220 02:11:21.922925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1220 02:11:21.948354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1220 02:11:21.988956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1220 02:11:22.072320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1220 02:11:22.112122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1220 02:11:22.125974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1220 02:11:22.146170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1220 02:11:22.171595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1220 02:11:22.226735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1220 02:11:25.144517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:12:15.706842       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1220 02:12:15.706898       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1220 02:12:15.706917       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1220 02:12:15.706972       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1220 02:12:15.707164       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1220 02:12:15.707186       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.306089    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959487a2071a7d265b217d3aee2b7e4fbafb02bb0585f7ff40beae30aa17b725"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.330357    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae4fd44c29005031aebaf78608172fd0e41f69bee4dd72c3ea114e035fc7e8e"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.330548    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:20.342746    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-032958\" already exists" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.617352    4206 apiserver.go:52] "Watching apiserver"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.683834    4206 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.742502    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a41d99-89a6-4d25-b7cf-57f49fbdea5a-lib-modules\") pod \"kube-proxy-22tlj\" (UID: \"07a41d99-89a6-4d25-b7cf-57f49fbdea5a\") " pod="kube-system/kube-proxy-22tlj"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.743177    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a41d99-89a6-4d25-b7cf-57f49fbdea5a-xtables-lock\") pod \"kube-proxy-22tlj\" (UID: \"07a41d99-89a6-4d25-b7cf-57f49fbdea5a\") " pod="kube-system/kube-proxy-22tlj"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.743223    4206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a74ca514-b136-40a6-9fd7-27c96e23bca7-tmp\") pod \"storage-provisioner\" (UID: \"a74ca514-b136-40a6-9fd7-27c96e23bca7\") " pod="kube-system/storage-provisioner"
	Dec 20 02:14:20 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:20.941330    4206 scope.go:117] "RemoveContainer" containerID="e389ed009c414813f08a16331049a1f7b81ae99102e1d3eee00456652f70d78e"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.187714    4206 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.188551    4206 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.190144    4206 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-wzcc7_kubernetes-dashboard(6951d269-7815-46e0-bfd0-c9dba02d7a47): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.191545    4206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wzcc7" podUID="6951d269-7815-46e0-bfd0-c9dba02d7a47"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218131    4206 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218192    4206 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.218346    4206 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9hzl_kube-system(ea98af6d-2555-48e1-9403-91cdbace7b1c): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.219866    4206 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9hzl" podUID="ea98af6d-2555-48e1-9403-91cdbace7b1c"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.415555    4206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f14a7d35a9c218a36064019d8d70cd5e2dc10c8fff7e745b9c07943ea6e37833"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.445678    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.445968    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: I1220 02:14:21.446323    4206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.476713    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-032958\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.478173    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-032958\" already exists" pod="kube-system/etcd-default-k8s-diff-port-032958"
	Dec 20 02:14:21 default-k8s-diff-port-032958 kubelet[4206]: E1220 02:14:21.479470    4206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-032958\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-032958"
	
	
	==> kubernetes-dashboard [3d0dc5e4eaf5] <==
	2025/12/20 02:13:33 Starting overwatch
	2025/12/20 02:13:33 Using namespace: kubernetes-dashboard
	2025/12/20 02:13:33 Using in-cluster config to connect to apiserver
	2025/12/20 02:13:33 Using secret token for csrf signing
	2025/12/20 02:13:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/20 02:13:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/20 02:13:33 Successful initial request to the apiserver, version: v1.34.3
	2025/12/20 02:13:33 Generating JWE encryption key
	2025/12/20 02:13:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/20 02:13:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/20 02:13:33 Initializing JWE encryption key from synchronized object
	2025/12/20 02:13:33 Creating in-cluster Sidecar client
	2025/12/20 02:13:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/20 02:13:33 Serving insecurely on HTTP port: 9090
	2025/12/20 02:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [db82439a8277] <==
	I1220 02:14:21.324067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1220 02:14:21.372255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1220 02:14:21.373462       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1220 02:14:21.382159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1220 02:14:24.842229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e389ed009c41] <==
	I1220 02:13:15.862545       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1220 02:13:45.872285       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7: exit status 1 (85.163393ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-r9hzl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wzcc7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-032958 describe pod metrics-server-746fcd58dc-r9hzl dashboard-metrics-scraper-6ffb444bf9-wzcc7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (41.63s)
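Note: the post-mortem above is driven entirely by shelling out to the minikube and kubectl binaries. If you want to repeat the same checks by hand against the profile named in the log, a minimal sketch (assuming a locally built out/minikube-linux-amd64 and the same profile name; the helper below is illustrative, not part of the test suite) looks like this:

// postmortem.go — hypothetical sketch of re-running the post-pause checks
// from the helpers above: query the API server state for the profile, then
// list any pods that are not in the Running phase.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "default-k8s-diff-port-032958"

	// Same status query issued by the helper: reports the API server state
	// for the profile's control plane node.
	state, err := run("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	fmt.Printf("apiserver state: %q (err: %v)\n", state, err)

	// Same field selector the helper uses to list non-running pods.
	pods, err := run("kubectl", "--context", profile, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
	fmt.Printf("non-running pods: %q (err: %v)\n", pods, err)
}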

                                                
                                    

Test pass (409/456)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.3/json-events 2.39
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.15
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-rc.1/json-events 2.58
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
31 TestOffline 85.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 133.42
38 TestAddons/serial/Volcano 42.25
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.55
44 TestAddons/parallel/Registry 13.91
45 TestAddons/parallel/RegistryCreds 0.67
46 TestAddons/parallel/Ingress 22.2
47 TestAddons/parallel/InspektorGadget 11.68
48 TestAddons/parallel/MetricsServer 5.88
50 TestAddons/parallel/CSI 64.67
51 TestAddons/parallel/Headlamp 22.51
52 TestAddons/parallel/CloudSpanner 6.44
53 TestAddons/parallel/LocalPath 11
54 TestAddons/parallel/NvidiaDevicePlugin 6.37
55 TestAddons/parallel/Yakd 10.67
57 TestAddons/StoppedEnableDisable 13.78
58 TestCertOptions 51.31
59 TestCertExpiration 305.31
60 TestDockerFlags 51.93
61 TestForceSystemdFlag 86.03
62 TestForceSystemdEnv 43.06
67 TestErrorSpam/setup 38.98
68 TestErrorSpam/start 0.33
69 TestErrorSpam/status 0.67
70 TestErrorSpam/pause 1.28
71 TestErrorSpam/unpause 1.5
72 TestErrorSpam/stop 6.69
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.68
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 59.18
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.18
84 TestFunctional/serial/CacheCmd/cache/add_local 1.17
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.01
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 52.97
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 0.99
95 TestFunctional/serial/LogsFileCmd 0.95
96 TestFunctional/serial/InvalidService 4.26
98 TestFunctional/parallel/ConfigCmd 0.42
99 TestFunctional/parallel/DashboardCmd 14.71
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.13
102 TestFunctional/parallel/StatusCmd 0.83
106 TestFunctional/parallel/ServiceCmdConnect 31.55
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 51.42
110 TestFunctional/parallel/SSHCmd 0.35
111 TestFunctional/parallel/CpCmd 1.23
112 TestFunctional/parallel/MySQL 43.67
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 1.19
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
122 TestFunctional/parallel/License 0.41
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
124 TestFunctional/parallel/DockerEnv/bash 0.79
125 TestFunctional/parallel/ProfileCmd/profile_list 0.36
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 0.88
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.59
133 TestFunctional/parallel/ImageCommands/Setup 0.99
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
135 TestFunctional/parallel/MountCmd/any-port 5.99
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
152 TestFunctional/parallel/MountCmd/specific-port 1.56
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
157 TestFunctional/parallel/ServiceCmd/DeployApp 26.39
158 TestFunctional/parallel/ServiceCmd/List 1.36
159 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
161 TestFunctional/parallel/ServiceCmd/Format 0.26
162 TestFunctional/parallel/ServiceCmd/URL 0.34
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 53.05
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 54.93
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.07
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.08
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.03
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.18
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.19
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 52.27
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.94
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.91
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.01
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.44
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 33.22
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.21
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.11
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.67
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 8.48
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 49.04
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.33
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.15
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 42.01
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.15
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.05
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
214 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.18
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.38
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.26
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.51
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.18
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.19
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.2
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.19
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 2.71
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.34
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 0.78
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.66
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 0.94
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.34
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.34
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.31
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.32
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 6.03
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.35
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.54
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.73
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash 0.7
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.07
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.22
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.25
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.25
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.29
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.25
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.32
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 0.98
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
263 TestMultiControlPlane/serial/StartCluster 242.31
264 TestMultiControlPlane/serial/DeployApp 4.51
265 TestMultiControlPlane/serial/PingHostFromPods 1.33
266 TestMultiControlPlane/serial/AddWorkerNode 49.03
267 TestMultiControlPlane/serial/NodeLabels 0.07
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
269 TestMultiControlPlane/serial/CopyFile 10.89
270 TestMultiControlPlane/serial/StopSecondaryNode 14.67
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
272 TestMultiControlPlane/serial/RestartSecondaryNode 21.83
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.78
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 164.5
275 TestMultiControlPlane/serial/DeleteSecondaryNode 6.87
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
277 TestMultiControlPlane/serial/StopCluster 38.07
278 TestMultiControlPlane/serial/RestartCluster 86.79
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
280 TestMultiControlPlane/serial/AddSecondaryNode 76.54
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestImageBuild/serial/Setup 38.24
285 TestImageBuild/serial/NormalBuild 1.43
286 TestImageBuild/serial/BuildWithBuildArg 0.83
287 TestImageBuild/serial/BuildWithDockerIgnore 0.6
288 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.66
293 TestJSONOutput/start/Command 80.81
294 TestJSONOutput/start/Audit 0
296 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/pause/Command 0.6
300 TestJSONOutput/pause/Audit 0
302 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/unpause/Command 0.55
306 TestJSONOutput/unpause/Audit 0
308 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
311 TestJSONOutput/stop/Command 13.64
312 TestJSONOutput/stop/Audit 0
314 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
315 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
316 TestErrorJSONOutput 0.24
321 TestMainNoArgs 0.06
322 TestMinikubeProfile 82.81
325 TestMountStart/serial/StartWithMountFirst 22.82
326 TestMountStart/serial/VerifyMountFirst 0.31
327 TestMountStart/serial/StartWithMountSecond 22.32
328 TestMountStart/serial/VerifyMountSecond 0.31
329 TestMountStart/serial/DeleteFirst 0.73
330 TestMountStart/serial/VerifyMountPostDelete 0.31
331 TestMountStart/serial/Stop 1.23
332 TestMountStart/serial/RestartStopped 18.93
333 TestMountStart/serial/VerifyMountPostStop 0.3
336 TestMultiNode/serial/FreshStart2Nodes 136.17
337 TestMultiNode/serial/DeployApp2Nodes 4.06
338 TestMultiNode/serial/PingHostFrom2Pods 0.89
339 TestMultiNode/serial/AddNode 44.7
340 TestMultiNode/serial/MultiNodeLabels 0.06
341 TestMultiNode/serial/ProfileList 0.44
342 TestMultiNode/serial/CopyFile 5.79
343 TestMultiNode/serial/StopNode 2.46
344 TestMultiNode/serial/StartAfterStop 38.37
345 TestMultiNode/serial/RestartKeepsNodes 164.59
346 TestMultiNode/serial/DeleteNode 1.99
347 TestMultiNode/serial/StopMultiNode 26.71
348 TestMultiNode/serial/RestartMultiNode 84.91
349 TestMultiNode/serial/ValidateNameConflict 40.31
356 TestScheduledStopUnix 110.38
357 TestSkaffold 115.57
360 TestRunningBinaryUpgrade 372.04
362 TestKubernetesUpgrade 181.57
365 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
366 TestNoKubernetes/serial/StartWithK8s 99.59
374 TestNoKubernetes/serial/StartWithStopK8s 40.25
375 TestStoppedBinaryUpgrade/Setup 0.7
376 TestStoppedBinaryUpgrade/Upgrade 100.76
377 TestNoKubernetes/serial/Start 20.17
378 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
379 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
380 TestNoKubernetes/serial/ProfileList 4.93
381 TestNoKubernetes/serial/Stop 1.23
382 TestNoKubernetes/serial/StartNoArgs 45.35
383 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
384 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
386 TestPause/serial/Start 85.02
387 TestPreload/Start-NoPreload-PullImage 117.68
388 TestISOImage/Setup 38.96
390 TestISOImage/Binaries/crictl 0.17
391 TestISOImage/Binaries/curl 0.2
392 TestISOImage/Binaries/docker 0.21
393 TestISOImage/Binaries/git 0.19
394 TestISOImage/Binaries/iptables 0.18
395 TestISOImage/Binaries/podman 0.18
396 TestISOImage/Binaries/rsync 0.19
397 TestISOImage/Binaries/socat 0.2
398 TestISOImage/Binaries/wget 0.18
399 TestISOImage/Binaries/VBoxControl 0.18
400 TestISOImage/Binaries/VBoxService 0.19
412 TestPause/serial/SecondStartNoReconfiguration 99.1
414 TestStartStop/group/old-k8s-version/serial/FirstStart 95.32
415 TestPreload/Restart-With-Preload-Check-User-Image 41.98
416 TestPause/serial/Pause 0.66
418 TestPause/serial/VerifyStatus 0.24
419 TestPause/serial/Unpause 0.62
421 TestStartStop/group/no-preload/serial/FirstStart 88.41
422 TestPause/serial/PauseAgain 0.8
423 TestPause/serial/DeletePaused 1.1
424 TestPause/serial/VerifyDeletedResources 0.59
426 TestStartStop/group/embed-certs/serial/FirstStart 99.05
427 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
428 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.36
429 TestStartStop/group/old-k8s-version/serial/Stop 11.83
430 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
431 TestStartStop/group/old-k8s-version/serial/SecondStart 39.75
432 TestStartStop/group/no-preload/serial/DeployApp 7.31
433 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
434 TestStartStop/group/no-preload/serial/Stop 12.73
435 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
436 TestStartStop/group/embed-certs/serial/DeployApp 9.32
437 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
438 TestStartStop/group/no-preload/serial/SecondStart 42.71
439 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
440 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
441 TestStartStop/group/embed-certs/serial/Stop 11.86
442 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
443 TestStartStop/group/old-k8s-version/serial/Pause 2.45
445 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.04
446 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
447 TestStartStop/group/embed-certs/serial/SecondStart 65.22
449 TestStartStop/group/newest-cni/serial/FirstStart 75.4
450 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
451 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
452 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
453 TestStartStop/group/no-preload/serial/Pause 2.99
454 TestNetworkPlugins/group/auto/Start 68.49
455 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
456 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
457 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
458 TestStartStop/group/embed-certs/serial/Pause 2.76
459 TestNetworkPlugins/group/kindnet/Start 74.47
460 TestStartStop/group/newest-cni/serial/DeployApp 0
461 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.53
462 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
463 TestStartStop/group/newest-cni/serial/Stop 12.01
464 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
465 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.88
466 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
467 TestStartStop/group/newest-cni/serial/SecondStart 40.12
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
469 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.76
470 TestNetworkPlugins/group/auto/KubeletFlags 0.17
471 TestNetworkPlugins/group/auto/NetCatPod 10.26
472 TestNetworkPlugins/group/auto/DNS 0.21
473 TestNetworkPlugins/group/auto/Localhost 0.17
474 TestNetworkPlugins/group/auto/HairPin 0.15
475 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
476 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
477 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
478 TestStartStop/group/newest-cni/serial/Pause 3.34
479 TestNetworkPlugins/group/calico/Start 92.46
480 TestNetworkPlugins/group/custom-flannel/Start 80.94
481 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
482 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
483 TestNetworkPlugins/group/kindnet/NetCatPod 14.27
484 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9
485 TestNetworkPlugins/group/kindnet/DNS 0.19
486 TestNetworkPlugins/group/kindnet/Localhost 0.15
487 TestNetworkPlugins/group/kindnet/HairPin 0.16
488 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
489 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
491 TestNetworkPlugins/group/false/Start 92.23
492 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
493 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
494 TestNetworkPlugins/group/enable-default-cni/Start 87.94
495 TestNetworkPlugins/group/calico/ControllerPod 6.01
496 TestNetworkPlugins/group/custom-flannel/DNS 0.17
497 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
498 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
499 TestNetworkPlugins/group/calico/KubeletFlags 0.2
500 TestNetworkPlugins/group/calico/NetCatPod 12.28
501 TestNetworkPlugins/group/flannel/Start 64.8
502 TestNetworkPlugins/group/calico/DNS 0.21
503 TestNetworkPlugins/group/calico/Localhost 0.18
504 TestNetworkPlugins/group/calico/HairPin 0.16
505 TestNetworkPlugins/group/bridge/Start 90.88
506 TestNetworkPlugins/group/false/KubeletFlags 0.2
507 TestNetworkPlugins/group/false/NetCatPod 11.28
508 TestNetworkPlugins/group/false/DNS 0.2
509 TestNetworkPlugins/group/false/Localhost 0.15
510 TestNetworkPlugins/group/false/HairPin 0.2
511 TestNetworkPlugins/group/kubenet/Start 87.17
512 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
513 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
514 TestNetworkPlugins/group/flannel/ControllerPod 6.01
515 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
516 TestNetworkPlugins/group/flannel/NetCatPod 11.3
517 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
518 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
519 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
520 TestNetworkPlugins/group/flannel/DNS 0.2
521 TestNetworkPlugins/group/flannel/Localhost 0.16
522 TestNetworkPlugins/group/flannel/HairPin 0.18
523 TestPreload/PreloadSrc/gcs 3.63
524 TestPreload/PreloadSrc/github 5.67
526 TestISOImage/PersistentMounts//data 0.2
527 TestISOImage/PersistentMounts//var/lib/docker 0.19
528 TestISOImage/PersistentMounts//var/lib/cni 0.19
529 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
530 TestISOImage/PersistentMounts//var/lib/minikube 0.19
531 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
532 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
533 TestPreload/PreloadSrc/gcs-cached 0.3
534 TestISOImage/VersionJSON 0.16
535 TestISOImage/eBPFSupport 0.17
536 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
537 TestNetworkPlugins/group/bridge/NetCatPod 11.21
538 TestNetworkPlugins/group/bridge/DNS 0.15
539 TestNetworkPlugins/group/bridge/Localhost 0.13
540 TestNetworkPlugins/group/bridge/HairPin 0.13
541 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
542 TestNetworkPlugins/group/kubenet/NetCatPod 10.23
543 TestNetworkPlugins/group/kubenet/DNS 0.14
544 TestNetworkPlugins/group/kubenet/Localhost 0.13
545 TestNetworkPlugins/group/kubenet/HairPin 0.18
TestDownloadOnly/v1.28.0/json-events (7.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-426573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-426573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --container-runtime=docker: (7.559872187s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.56s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1220 01:17:36.759646   13018 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1220 01:17:36.759735   13018 preload.go:203] Found local preload: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
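The preload-exists check above only verifies that a cached tarball is already present on disk at the path printed in the log. A minimal sketch of that existence check (assuming the cache lives under $MINIKUBE_HOME and mirrors the path layout from the log; this is not minikube's own preload package):

// preloadcheck.go — hypothetical sketch: is a preloaded-images tarball for a
// given Kubernetes version and container runtime already in the local cache?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	// Filename pattern copied from the log line above (preload schema v18).
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // assumption: cache root is $MINIKUBE_HOME
	p := preloadPath(home, "v1.28.0", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no local preload, a download would be needed:", err)
	}
}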

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-426573
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-426573: exit status 85 (68.96575ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬──────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                    ARGS                                                                                     │       PROFILE        │   USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼──────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-426573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --container-runtime=docker │ download-only-426573 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴──────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/20 01:17:29
	Running on machine: minitest-vm-9d09530a
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1220 01:17:29.251182   13030 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:17:29.251488   13030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:29.251499   13030 out.go:374] Setting ErrFile to fd 2...
	I1220 01:17:29.251505   13030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:29.251743   13030 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	W1220 01:17:29.251880   13030 root.go:314] Error reading config file at /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/config/config.json: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/config/config.json: no such file or directory
	I1220 01:17:29.252443   13030 out.go:368] Setting JSON to true
	I1220 01:17:29.253308   13030 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":162,"bootTime":1766193287,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:17:29.253404   13030 start.go:143] virtualization: kvm guest
	I1220 01:17:29.260738   13030 out.go:99] [download-only-426573] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	W1220 01:17:29.260878   13030 preload.go:369] Failed to list preload files: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball: no such file or directory
	I1220 01:17:29.260923   13030 notify.go:221] Checking for updates...
	I1220 01:17:29.262502   13030 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:17:29.263756   13030 out.go:171] KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:17:29.264939   13030 out.go:171] MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:17:29.266427   13030 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1220 01:17:29.268599   13030 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1220 01:17:29.268869   13030 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 01:17:29.980381   13030 out.go:99] Using the kvm2 driver based on user configuration
	I1220 01:17:29.980415   13030 start.go:309] selected driver: kvm2
	I1220 01:17:29.980424   13030 start.go:928] validating driver "kvm2" against <nil>
	I1220 01:17:29.980753   13030 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1220 01:17:29.981266   13030 start_flags.go:413] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1220 01:17:29.981436   13030 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1220 01:17:29.981471   13030 cni.go:84] Creating CNI manager for ""
	I1220 01:17:29.981530   13030 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1220 01:17:29.981545   13030 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1220 01:17:29.981625   13030 start.go:353] cluster config:
	{Name:download-only-426573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-426573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 01:17:29.981822   13030 iso.go:125] acquiring lock: {Name:mk8cff2fd2ec419d0f1f974993910ae0235f0b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1220 01:17:29.983260   13030 out.go:99] Downloading VM boot image ...
	I1220 01:17:29.983309   13030 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso.sha256 -> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1220 01:17:32.989271   13030 out.go:99] Starting "download-only-426573" primary control-plane node in "download-only-426573" cluster
	I1220 01:17:32.989305   13030 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1220 01:17:33.010047   13030 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1220 01:17:33.010078   13030 cache.go:65] Caching tarball of preloaded images
	I1220 01:17:33.010350   13030 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1220 01:17:33.012191   13030 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1220 01:17:33.012244   13030 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1220 01:17:33.012253   13030 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1220 01:17:33.042308   13030 preload.go:310] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1220 01:17:33.042443   13030 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-426573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-426573"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
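The log block above shows the preload being fetched with an MD5 checksum obtained from the GCS API and appended as a ?checksum=md5:... parameter. A minimal sketch of that download-and-verify idea (URL and checksum copied from the log; this illustrates the technique, not minikube's actual download.go implementation):

// checksumget.go — hypothetical sketch: stream a tarball to disk while
// hashing it, then compare the MD5 against the expected value.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the stream while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4"
	err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "8a955be835827bc584bcce0658a7fcc9")
	fmt.Println("download result:", err)
}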

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-426573
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (2.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-293648 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-293648 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2  --container-runtime=docker: (2.391652526s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (2.39s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1220 01:17:39.504269   13018 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
I1220 01:17:39.504296   13018 preload.go:203] Found local preload: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-293648
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-293648: exit status 85 (68.172206ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                    ARGS                                                                                     │       PROFILE        │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-426573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --container-runtime=docker │ download-only-426573 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                       │ minikube             │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ delete  │ -p download-only-426573                                                                                                                                                     │ download-only-426573 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ start   │ -o=json --download-only -p download-only-293648 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2  --container-runtime=docker │ download-only-293648 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/20 01:17:37
	Running on machine: minitest-vm-9d09530a
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1220 01:17:37.161380   13214 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:17:37.161464   13214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:37.161471   13214 out.go:374] Setting ErrFile to fd 2...
	I1220 01:17:37.161475   13214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:37.161634   13214 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:17:37.162059   13214 out.go:368] Setting JSON to true
	I1220 01:17:37.162846   13214 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":170,"bootTime":1766193287,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:17:37.162926   13214 start.go:143] virtualization: kvm guest
	I1220 01:17:37.164762   13214 out.go:99] [download-only-293648] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 01:17:37.164875   13214 notify.go:221] Checking for updates...
	I1220 01:17:37.166164   13214 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:17:37.167475   13214 out.go:171] KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:17:37.168783   13214 out.go:171] MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:17:37.169950   13214 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-293648 host does not exist
	  To start a cluster, run: "minikube start -p download-only-293648"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-293648
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (2.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-291885 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-291885 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2  --container-runtime=docker: (2.58403577s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (2.58s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1220 01:17:42.444587   13018 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
I1220 01:17:42.444620   13018 preload.go:203] Found local preload: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-291885
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-291885: exit status 85 (67.113629ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-426573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --container-runtime=docker      │ download-only-426573 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ delete  │ -p download-only-426573                                                                                                                                                          │ download-only-426573 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ start   │ -o=json --download-only -p download-only-293648 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2  --container-runtime=docker      │ download-only-293648 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ delete  │ -p download-only-293648                                                                                                                                                          │ download-only-293648 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │ 20 Dec 25 01:17 UTC │
	│ start   │ -o=json --download-only -p download-only-291885 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2  --container-runtime=docker │ download-only-291885 │ minitest │ v1.37.0 │ 20 Dec 25 01:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/20 01:17:39
	Running on machine: minitest-vm-9d09530a
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1220 01:17:39.908688   13376 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:17:39.908914   13376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:39.908923   13376 out.go:374] Setting ErrFile to fd 2...
	I1220 01:17:39.908927   13376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:17:39.909101   13376 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:17:39.909550   13376 out.go:368] Setting JSON to true
	I1220 01:17:39.910307   13376 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":173,"bootTime":1766193287,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:17:39.910397   13376 start.go:143] virtualization: kvm guest
	I1220 01:17:39.912048   13376 out.go:99] [download-only-291885] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 01:17:39.912190   13376 notify.go:221] Checking for updates...
	I1220 01:17:39.913321   13376 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:17:39.914554   13376 out.go:171] KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:17:39.915651   13376 out.go:171] MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:17:39.916690   13376 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-291885 host does not exist
	  To start a cluster, run: "minikube start -p download-only-291885"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-291885
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1220 01:17:43.197583   13018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-900084 --alsologtostderr --binary-mirror http://127.0.0.1:44331 --driver=kvm2  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-900084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-900084
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestOffline (85.91s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-728343 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-728343 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=docker: (1m24.97586316s)
helpers_test.go:176: Cleaning up "offline-docker-728343" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-728343
--- PASS: TestOffline (85.91s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-616728
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-616728: exit status 85 (66.422371ms)

                                                
                                                
-- stdout --
	* Profile "addons-616728" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-616728"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-616728
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-616728: exit status 85 (65.821735ms)

                                                
                                                
-- stdout --
	* Profile "addons-616728" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-616728"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (133.42s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-616728 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-616728 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.421664225s)
--- PASS: TestAddons/Setup (133.42s)

                                                
                                    
TestAddons/serial/Volcano (42.25s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 30.614883ms
addons_test.go:878: volcano-admission stabilized in 31.798046ms
addons_test.go:870: volcano-scheduler stabilized in 31.869983ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-vr2wp" [b4fafda9-720a-4fec-b6f4-3f71fab2c2b1] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005017502s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-grn8b" [9a440b08-dc1d-4a97-a0e2-5034634811b9] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00447569s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-rc2sw" [fe3e9640-6e82-4b6c-9ecc-be50b6372945] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005740661s
addons_test.go:905: (dbg) Run:  kubectl --context addons-616728 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-616728 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-616728 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [16e9f5b8-c9c6-4f7d-b03a-3ae50c5f1764] Pending
helpers_test.go:353: "test-job-nginx-0" [16e9f5b8-c9c6-4f7d-b03a-3ae50c5f1764] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [16e9f5b8-c9c6-4f7d-b03a-3ae50c5f1764] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004707142s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable volcano --alsologtostderr -v=1: (11.845000338s)
--- PASS: TestAddons/serial/Volcano (42.25s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-616728 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-616728 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-616728 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-616728 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e4a49058-6088-4faf-820f-3f11ba9047fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e4a49058-6088-4faf-820f-3f11ba9047fe] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003565463s
addons_test.go:696: (dbg) Run:  kubectl --context addons-616728 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-616728 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-616728 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.55s)

                                                
                                    
TestAddons/parallel/Registry (13.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.914284ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-8rlcc" [f39494c5-3ea5-409f-919a-1910c4c18f75] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006691119s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-h75f2" [38d950f4-d1fd-4f4c-9dc8-f29289d44afd] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005700623s
addons_test.go:394: (dbg) Run:  kubectl --context addons-616728 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-616728 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-616728 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.150903631s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 ip
2025/12/20 01:21:10 [DEBUG] GET http://192.168.39.34:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.91s)
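
For reference, the "[DEBUG] GET http://192.168.39.34:5000" line above is the host-side probe of the registry addon. A rough Go equivalent, assuming the node IP from this run and that the registry proxy stays published on port 5000:

// registryprobe_sketch.go - illustrative only; the IP below is the one
// reported by `minikube -p addons-616728 ip` in this run and will differ
// elsewhere.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.34:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// The registry image used by the addon typically answers "/" with 200 OK.
	fmt.Println("registry responded with", resp.Status)
}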

                                                
                                    
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.126998ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-616728
addons_test.go:334: (dbg) Run:  kubectl --context addons-616728 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
TestAddons/parallel/Ingress (22.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-616728 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-616728 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-616728 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [5762942a-78a4-4f15-8ffe-41146f168ba8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [5762942a-78a4-4f15-8ffe-41146f168ba8] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003316008s
I1220 01:21:08.820780   13018 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-616728 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.34
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable ingress-dns --alsologtostderr -v=1: (2.181340098s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable ingress --alsologtostderr -v=1: (7.765299418s)
--- PASS: TestAddons/parallel/Ingress (22.20s)
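
For reference, the Host-header check above is run with curl from inside the VM against 127.0.0.1. A rough host-side equivalent in Go, assuming the node IP reported earlier in the log (192.168.39.34) and the nginx.example.com rule from testdata/nginx-ingress-v1.yaml:

// ingressprobe_sketch.go - illustrative only; substitute the output of
// `minikube -p addons-616728 ip` for the hard-coded address.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.34/", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The ingress routes on the virtual host name, so the Host header is
	// what selects the nginx backend rather than the default 404 handler.
	req.Host = "nginx.example.com"
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, "-", len(body), "bytes")
}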

                                                
                                    
TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-rmmrq" [8c72ff0b-376b-4c4a-b42f-e6342f755d5d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003401989s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable inspektor-gadget --alsologtostderr -v=1: (5.680336344s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.88s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.538221ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-xbf8h" [c95c3e50-c084-45bb-9015-b36b13985693] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006404353s
addons_test.go:465: (dbg) Run:  kubectl --context addons-616728 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

                                                
                                    
TestAddons/parallel/CSI (64.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1220 01:21:03.135131   13018 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1220 01:21:03.141579   13018 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1220 01:21:03.141601   13018 kapi.go:107] duration metric: took 6.482899ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.491825ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-616728 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-616728 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e7acdc00-7457-4fa5-8811-1b0a986bf51c] Pending
helpers_test.go:353: "task-pv-pod" [e7acdc00-7457-4fa5-8811-1b0a986bf51c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [e7acdc00-7457-4fa5-8811-1b0a986bf51c] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00418119s
addons_test.go:574: (dbg) Run:  kubectl --context addons-616728 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-616728 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-616728 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-616728 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-616728 delete pod task-pv-pod: (1.008154691s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-616728 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-616728 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-616728 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [e6113cac-3cb4-4e29-9055-cbbc909606be] Pending
helpers_test.go:353: "task-pv-pod-restore" [e6113cac-3cb4-4e29-9055-cbbc909606be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [e6113cac-3cb4-4e29-9055-cbbc909606be] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003723234s
addons_test.go:616: (dbg) Run:  kubectl --context addons-616728 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-616728 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-616728 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.730579427s)
--- PASS: TestAddons/parallel/CSI (64.67s)
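
For reference, the long run of `get pvc hpvc -o jsonpath={.status.phase}` lines above is a poll loop waiting for the claim to become Bound. A minimal Go sketch of the same wait, assuming kubectl on PATH and the addons-616728 context from this run:

// pvcwait_sketch.go - illustrative stand-in for the polling visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-616728", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}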

                                                
                                    
TestAddons/parallel/Headlamp (22.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-616728 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-24f9q" [238c780b-3663-4e81-9ff8-c48d32a32441] Pending
helpers_test.go:353: "headlamp-dfcdc64b-24f9q" [238c780b-3663-4e81-9ff8-c48d32a32441] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-24f9q" [238c780b-3663-4e81-9ff8-c48d32a32441] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.015636486s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable headlamp --alsologtostderr -v=1: (5.631185416s)
--- PASS: TestAddons/parallel/Headlamp (22.51s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-kk4l9" [cb206998-71ee-4b6c-baad-a014f7a4c911] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004213598s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.44s)

                                                
                                    
TestAddons/parallel/LocalPath (11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-616728 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-616728 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [bba11516-a991-45e2-be55-48c1c6189d61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [bba11516-a991-45e2-be55-48c1c6189d61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [bba11516-a991-45e2-be55-48c1c6189d61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003533904s
addons_test.go:969: (dbg) Run:  kubectl --context addons-616728 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 ssh "cat /opt/local-path-provisioner/pvc-42857347-2dfd-4e09-bc96-f62f44caf143_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-616728 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-616728 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.00s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.37s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-2p5h2" [8e327f5f-6635-4847-9fbf-782eb89951dd] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004050122s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.37s)

                                                
                                    
TestAddons/parallel/Yakd (10.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-67dgk" [8c7f9bb5-a4ec-4857-a683-def1e8b76fb9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00583968s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-616728 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-616728 addons disable yakd --alsologtostderr -v=1: (5.663975603s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.78s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-616728
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-616728: (13.594046009s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-616728
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-616728
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-616728
--- PASS: TestAddons/StoppedEnableDisable (13.78s)

                                                
                                    
TestCertOptions (51.31s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-148682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-148682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=docker: (50.057920316s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-148682 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-148682 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-148682 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-148682" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-148682
--- PASS: TestCertOptions (51.31s)
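
For reference, the openssl call above verifies the SANs requested via --apiserver-ips and --apiserver-names. A small Go sketch of the same check, assuming the certificate has first been copied out of the VM to ./apiserver.crt (hypothetical local path):

// certsans_sketch.go - illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// This run expects 127.0.0.1 and 192.168.15.15 among the IP SANs and
	// localhost / www.google.com among the DNS SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("NotAfter:", cert.NotAfter)
}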

                                                
                                    
TestCertExpiration (305.31s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925202 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925202 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=docker: (1m0.06283109s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925202 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925202 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=docker: (1m4.385186388s)
helpers_test.go:176: Cleaning up "cert-expiration-925202" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-925202
--- PASS: TestCertExpiration (305.31s)

                                                
                                    
TestDockerFlags (51.93s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-261868 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
E1220 02:05:18.900531   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-261868 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (50.746920667s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-261868 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-261868 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-261868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-261868
--- PASS: TestDockerFlags (51.93s)
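
For reference, the two systemctl show calls above assert that the --docker-env and --docker-opt values reached the docker unit. A rough Go sketch of the Environment check, assuming the binary path and profile name from this run:

// dockerflags_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-261868",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	env := string(out)
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
	}
}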

                                                
                                    
TestForceSystemdFlag (86.03s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-827613 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-827613 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (1m24.743837963s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-827613 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-827613" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-827613
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-827613: (1.032634897s)
--- PASS: TestForceSystemdFlag (86.03s)

                                                
                                    
TestForceSystemdEnv (43.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-426274 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-426274 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (42.083142351s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-426274 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-426274" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-426274
--- PASS: TestForceSystemdEnv (43.06s)

                                                
                                    
TestErrorSpam/setup (38.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-838510 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-838510 --driver=kvm2  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-838510 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-838510 --driver=kvm2  --container-runtime=docker: (38.983527924s)
--- PASS: TestErrorSpam/setup (38.98s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 status
--- PASS: TestErrorSpam/status (0.67s)

                                                
                                    
TestErrorSpam/pause (1.28s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 pause
--- PASS: TestErrorSpam/pause (1.28s)

                                                
                                    
TestErrorSpam/unpause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
TestErrorSpam/stop (6.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop: (3.387862146s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop: (1.984660115s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-838510 --log_dir /tmp/nospam-838510 stop: (1.317409304s)
--- PASS: TestErrorSpam/stop (6.69s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/test/nested/copy/13018/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-281340 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=docker: (1m17.683944063s)
--- PASS: TestFunctional/serial/StartWithProxy (77.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (59.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1220 01:24:30.052257   13018 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --alsologtostderr -v=8
E1220 01:24:57.552410   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.557797   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.568102   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.588407   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.628748   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.709123   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:57.869611   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:58.190280   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:24:58.831265   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:25:00.111574   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:25:02.672349   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:25:07.793262   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:25:18.033639   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-281340 --alsologtostderr -v=8: (59.177985867s)
functional_test.go:678: soft start took 59.178690086s for "functional-281340" cluster.
I1220 01:25:29.230615   13018 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (59.18s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-281340 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-281340 /tmp/TestFunctionalserialCacheCmdcacheadd_local799544625/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache add minikube-local-cache-test:functional-281340
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache delete minikube-local-cache-test:functional-281340
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-281340
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (170.376907ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.01s)
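
The reload check deletes the image inside the node, confirms crictl no longer sees it, then restores it with cache reload. A sketch of the same sequence in Go, again assuming the binary and profile from the log; the exit status is used the same way the test uses it (inspecti failing first, succeeding after reload).

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// minikube runs a subcommand against the profile and returns the command error.
func minikube(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-281340"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	// Remove the image inside the node, as the test does over ssh.
	if err := minikube("ssh", "sudo docker rmi "+img); err != nil {
		log.Fatal(err)
	}
	// crictl should now report the image as missing (non-zero exit expected).
	if err := minikube("ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("expected crictl inspecti to fail after rmi")
	}
	// Reload the cache, after which the image should be present again.
	if err := minikube("cache", "reload"); err != nil {
		log.Fatal(err)
	}
	if err := minikube("ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatal("image still missing after cache reload: ", err)
	}
}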

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 kubectl -- --context functional-281340 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-281340 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (52.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1220 01:25:38.514327   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:26:19.476282   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-281340 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.969004156s)
functional_test.go:776: restart took 52.969091863s for "functional-281340" cluster.
I1220 01:26:27.329787   13018 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (52.97s)
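
The restart above passes an apiserver flag through --extra-config and blocks on --wait=all, which is why the step takes close to a minute. A minimal Go wrapper around the same verbatim command; the 5-minute timeout is an arbitrary safety net, not a value the test uses.

package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Arbitrary timeout; the logged run completed in ~53s.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Verbatim from the test log: restart with an extra apiserver admission plugin
	// and wait for all components to report ready.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "functional-281340",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}
}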

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-281340 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
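
The health check lists control-plane pods as JSON and asserts each one is Running and Ready. A sketch that decodes just the fields that check needs; the struct mirrors the standard Pod status fields (phase and the Ready condition), and the kubectl invocation is the one from the log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList decodes only the fields the health check looks at.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-281340",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}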

                                                
                                    
TestFunctional/serial/LogsCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 logs --file /tmp/TestFunctionalserialLogsFileCmd4138578582/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-281340 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-281340
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-281340: exit status 115 (237.646671ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.31:31694 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_1a9e407b3012cd2729ac720152316fb3398a8e6b_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-281340 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
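
This negative test relies entirely on the exit code: with no running pod behind the service, minikube service exits 115 (SVC_UNREACHABLE). A small Go sketch that extracts and checks that status, assuming invalidsvc.yaml is still applied; only the command line and the value 115 come from the log, the rest is plumbing.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-281340")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("expected the service command to fail for invalid-svc")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 115:
		fmt.Println("got the expected SVC_UNREACHABLE exit status 115")
	default:
		log.Fatalf("unexpected error: %v", err)
	}
}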

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 config get cpus: exit status 14 (66.464858ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 config get cpus: exit status 14 (73.54907ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
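
The config round-trip leans on exit status 14 meaning the key is not present in the config. A Go sketch of the same unset/get/set cycle; the subcommands and the exit-code value are taken from the log output above.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// configCmd runs `minikube config ...` for the profile and returns trimmed output and exit code.
func configCmd(args ...string) (string, int) {
	full := append([]string{"-p", "functional-281340", "config"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		log.Fatal(err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	configCmd("unset", "cpus")
	if _, code := configCmd("get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 for an unset key, got %d", code)
	}
	configCmd("set", "cpus", "2")
	if val, code := configCmd("get", "cpus"); code != 0 {
		log.Fatalf("get after set failed with exit %d", code)
	} else {
		fmt.Println("cpus is now set to", val) // value printed for inspection
	}
	configCmd("unset", "cpus")
}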

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-281340 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-281340 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 16864: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.71s)
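
The dashboard test runs the command as a background daemon, reads the proxied URL, then stops the process (the "unable to kill pid" message simply means it had already exited). A rough Go sketch of the same start/read/stop pattern; treating any stdout line starting with "http" as the URL is an assumption about the --url output, not something the log verifies.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Start the dashboard proxy in the background, as the test's daemon helper does.
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-281340")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Assumption: with --url the proxied address is eventually printed on stdout;
	// echo any line that looks like a URL.
	go func() {
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			if line := scanner.Text(); strings.HasPrefix(line, "http") {
				fmt.Println("dashboard available at:", line)
			}
		}
	}()

	// Give the proxy a moment, then stop it, mirroring the test's stop step.
	time.Sleep(10 * time.Second)
	if err := cmd.Process.Kill(); err != nil {
		log.Printf("kill: %v (process may have already exited)", err)
	}
	_ = cmd.Wait()
}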

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-281340 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker: exit status 23 (114.86738ms)

                                                
                                                
-- stdout --
	* [functional-281340] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:26:35.611495   16756 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:26:35.611727   16756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:26:35.611740   16756 out.go:374] Setting ErrFile to fd 2...
	I1220 01:26:35.611747   16756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:26:35.611982   16756 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:26:35.612552   16756 out.go:368] Setting JSON to false
	I1220 01:26:35.613464   16756 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":709,"bootTime":1766193287,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:26:35.613571   16756 start.go:143] virtualization: kvm guest
	I1220 01:26:35.615526   16756 out.go:179] * [functional-281340] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 01:26:35.617024   16756 notify.go:221] Checking for updates...
	I1220 01:26:35.617033   16756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:26:35.618465   16756 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:26:35.619935   16756 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:26:35.621041   16756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 01:26:35.622450   16756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 01:26:35.623955   16756 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:26:35.624498   16756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 01:26:35.657946   16756 out.go:179] * Using the kvm2 driver based on existing profile
	I1220 01:26:35.659360   16756 start.go:309] selected driver: kvm2
	I1220 01:26:35.659379   16756 start.go:928] validating driver "kvm2" against &{Name:functional-281340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-281340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 01:26:35.659480   16756 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 01:26:35.661657   16756 out.go:203] 
	W1220 01:26:35.663317   16756 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1220 01:26:35.664850   16756 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.24s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281340 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-281340 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker: exit status 23 (126.58539ms)

                                                
                                                
-- stdout --
	* [functional-281340] minikube v1.37.0 sur Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:26:34.344774   16503 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:26:34.345011   16503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:26:34.345022   16503 out.go:374] Setting ErrFile to fd 2...
	I1220 01:26:34.345025   16503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:26:34.345443   16503 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:26:34.346165   16503 out.go:368] Setting JSON to false
	I1220 01:26:34.347234   16503 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":707,"bootTime":1766193287,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:26:34.347333   16503 start.go:143] virtualization: kvm guest
	I1220 01:26:34.348606   16503 out.go:179] * [functional-281340] minikube v1.37.0 sur Ubuntu 24.04 (kvm/amd64)
	I1220 01:26:34.349830   16503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:26:34.349841   16503 notify.go:221] Checking for updates...
	I1220 01:26:34.351782   16503 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:26:34.352932   16503 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:26:34.354173   16503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 01:26:34.355352   16503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 01:26:34.356986   16503 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:26:34.357523   16503 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 01:26:34.391220   16503 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1220 01:26:34.392428   16503 start.go:309] selected driver: kvm2
	I1220 01:26:34.392447   16503 start.go:928] validating driver "kvm2" against &{Name:functional-281340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-281340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 01:26:34.392599   16503 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 01:26:34.395147   16503 out.go:203] 
	W1220 01:26:34.396346   16503 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1220 01:26:34.398711   16503 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
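
status supports both Go-template formatting and JSON output; the template fields used above (Host, Kubelet, APIServer, Kubeconfig) are the same ones a JSON consumer would read. A sketch that decodes those four fields from status -o json, assuming a single-node profile returns one JSON object; the struct is limited to the fields the log's template already names.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// clusterStatus holds only the fields referenced by the test's format template.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-281340",
		"status", "-o", "json").Output()
	if err != nil && len(out) == 0 {
		// status uses non-zero exits for non-Running states, so an error alone is
		// not necessarily fatal; only bail out when nothing was printed at all.
		log.Fatal(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}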

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (31.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-281340 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-281340 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-zp7kx" [23412de9-c9ba-482e-84ea-dc4b948b401f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/12/20 01:26:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "hello-node-connect-7d85dfc575-zp7kx" [23412de9-c9ba-482e-84ea-dc4b948b401f] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 31.008462044s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.31:31505
functional_test.go:1680: http://192.168.39.31:31505: success! body:
Request served by hello-node-connect-7d85dfc575-zp7kx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.31:31505
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (31.55s)
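
Once the NodePort URL is resolved with service --url, the check is a plain HTTP GET whose body names the serving pod. A Go sketch of the resolve-and-probe step (the deployment and expose steps are omitted); trimming the command output to a single URL line is an assumption about the --url format.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL for the service created earlier in the test.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-281340",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // assumed: --url prints just the endpoint

	// Probe the endpoint; the echo server reports which pod served the request.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}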

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [274925d1-0ea5-42f4-b722-0162cbab1c6c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004745722s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-281340 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-281340 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-281340 get pvc myclaim -o=json
I1220 01:26:41.290063   13018 retry.go:31] will retry after 1.475815915s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:05ed70c4-0bf7-499a-81c8-3aa88d40d249 ResourceVersion:777 Generation:0 CreationTimestamp:2025-12-20 01:26:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001719e60 VolumeMode:0xc001719e80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-281340 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-281340 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [95d47d8e-68ee-4dd0-9ac3-411c96f7ebb6] Pending
helpers_test.go:353: "sp-pod" [95d47d8e-68ee-4dd0-9ac3-411c96f7ebb6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [95d47d8e-68ee-4dd0-9ac3-411c96f7ebb6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.006255184s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-281340 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-281340 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-281340 delete -f testdata/storage-provisioner/pod.yaml: (1.787696937s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-281340 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5557ab59-eb7c-4cac-a473-7d87e2128bee] Pending
helpers_test.go:353: "sp-pod" [5557ab59-eb7c-4cac-a473-7d87e2128bee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [5557ab59-eb7c-4cac-a473-7d87e2128bee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004178029s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-281340 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.42s)
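
The PVC step polls until the claim leaves Pending (the retry.go line above is exactly that), then proves persistence by writing a file from one pod and reading it from a replacement pod. A sketch of the polling half using kubectl's jsonpath output; the 30-attempt, 2-second cadence is an arbitrary choice, not the test's randomized retry.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the claim's phase until it reports Bound, as the test's retry loop does.
	for attempt := 1; attempt <= 30; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-281340",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		if err != nil {
			log.Fatal(err)
		}
		phase := strings.TrimSpace(string(out))
		if phase == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		fmt.Printf("attempt %d: phase=%q, retrying\n", attempt, phase)
		time.Sleep(2 * time.Second)
	}
	log.Fatal("pvc myclaim never reached Bound")
}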

                                                
                                    
TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh -n functional-281340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cp functional-281340:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4177328396/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh -n functional-281340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh -n functional-281340 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

                                                
                                    
TestFunctional/parallel/MySQL (43.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-281340 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-tqhtx" [e827996c-23ea-4ded-9524-e87d98db94da] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-tqhtx" [e827996c-23ea-4ded-9524-e87d98db94da] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.00378383s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;": exit status 1 (174.46699ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1220 01:27:14.340633   13018 retry.go:31] will retry after 1.332338826s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;": exit status 1 (222.166295ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1220 01:27:15.895703   13018 retry.go:31] will retry after 878.518617ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;": exit status 1 (400.768967ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1220 01:27:17.176166   13018 retry.go:31] will retry after 3.226983464s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;": exit status 1 (221.173912ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1220 01:27:20.624802   13018 retry.go:31] will retry after 4.816434097s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-281340 exec mysql-6bcdcbc558-tqhtx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (43.67s)
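
The MySQL probe is retried because the server keeps refusing connections for a while after the pod is Running (socket errors first, then transient auth errors while it initializes). A sketch of the same retry pattern around the logged kubectl exec command; the fixed 5-second backoff and 20-attempt cap are arbitrary stand-ins for the test's randomized retry helper.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Pod name comes from the logged run; a real script would look it up by label (app=mysql).
	args := []string{"--context", "functional-281340", "exec", "mysql-6bcdcbc558-tqhtx", "--",
		"mysql", "-ppassword", "-e", "show databases;"}

	for attempt := 1; attempt <= 20; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("mysql answered on attempt %d:\n%s", attempt, out)
			return
		}
		// Early failures look like the ones in the log: socket not ready, or
		// access denied while the server is still initializing.
		fmt.Printf("attempt %d failed (%v), retrying in 5s\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}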

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13018/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /etc/test/nested/copy/13018/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13018.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /etc/ssl/certs/13018.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13018.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /usr/share/ca-certificates/13018.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/130182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /etc/ssl/certs/130182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/130182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /usr/share/ca-certificates/130182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.19s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-281340 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh "sudo systemctl is-active crio": exit status 1 (196.587562ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)

                                                
                                    
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-281340 docker-env) && out/minikube-linux-amd64 status -p functional-281340"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-281340 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.79s)
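
docker-env only prints environment exports, so the test wraps it in a bash eval before calling docker. The same one-liner can be shelled out from Go; the command string is taken verbatim from the log, and /bin/bash -c is required because of the eval.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Verbatim from the test: point the docker CLI at the VM's daemon, then list images.
	script := "eval $(out/minikube-linux-amd64 -p functional-281340 docker-env) && docker images"
	cmd := exec.Command("/bin/bash", "-c", script)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}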

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "290.587896ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.033232ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281340 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-281340
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-281340
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281340 image ls --format short --alsologtostderr:
I1220 01:27:17.205702   17506 out.go:360] Setting OutFile to fd 1 ...
I1220 01:27:17.205824   17506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.205838   17506 out.go:374] Setting ErrFile to fd 2...
I1220 01:27:17.205844   17506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.206050   17506 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:27:17.206796   17506 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.206931   17506 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.209447   17506 ssh_runner.go:195] Run: systemctl --version
I1220 01:27:17.213358   17506 main.go:144] libmachine: domain functional-281340 has defined MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.213872   17506 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:96:cc", ip: ""} in network mk-functional-281340: {Iface:virbr1 ExpiryTime:2025-12-20 02:23:26 +0000 UTC Type:0 Mac:52:54:00:25:96:cc Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-281340 Clientid:01:52:54:00:25:96:cc}
I1220 01:27:17.213912   17506 main.go:144] libmachine: domain functional-281340 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.214139   17506 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-281340/id_rsa Username:docker}
I1220 01:27:17.304800   17506 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281340 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.34.3           │ 36eef8e07bdd6 │ 71.9MB │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ docker.io/library/minikube-local-cache-test │ functional-281340 │ f2b4333361007 │ 30B    │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ localhost/my-image                          │ functional-281340 │ 62f069d036f1b │ 1.24MB │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ 04da2b0513cd7 │ 53.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3           │ 5826b25d990d7 │ 74.9MB │
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ docker.io/kicbase/echo-server               │ functional-281340 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.3           │ aa27095f56193 │ 88MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.3           │ aec12dadf56dd │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281340 image ls --format table --alsologtostderr:
I1220 01:27:20.454259   17624 out.go:360] Setting OutFile to fd 1 ...
I1220 01:27:20.454595   17624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:20.454602   17624 out.go:374] Setting ErrFile to fd 2...
I1220 01:27:20.454608   17624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:20.454913   17624 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:27:20.456313   17624 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:20.456499   17624 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:20.459704   17624 ssh_runner.go:195] Run: systemctl --version
I1220 01:27:20.462359   17624 main.go:144] libmachine: domain functional-281340 has defined MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:20.462928   17624 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:96:cc", ip: ""} in network mk-functional-281340: {Iface:virbr1 ExpiryTime:2025-12-20 02:23:26 +0000 UTC Type:0 Mac:52:54:00:25:96:cc Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-281340 Clientid:01:52:54:00:25:96:cc}
I1220 01:27:20.462963   17624 main.go:144] libmachine: domain functional-281340 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:20.463565   17624 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-281340/id_rsa Username:docker}
I1220 01:27:20.566921   17624 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281340 image ls --format json --alsologtostderr:
[{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"88000000"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"74900000"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"71900000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-281340","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id"
:"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"f2b433336100763c2c5a927e4197451113e42a52eaa88e97cad58e6c3ed9cea5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-281340"],"size":"30"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc
512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"62f069d036f1b7aa16eaf0564800d37ff12ec9c368f001a9e71f2f7126d42056","repoDigests":[],"r
epoTags":["localhost/my-image:functional-281340"],"size":"1240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281340 image ls --format json --alsologtostderr:
I1220 01:27:20.219851   17612 out.go:360] Setting OutFile to fd 1 ...
I1220 01:27:20.220231   17612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:20.220256   17612 out.go:374] Setting ErrFile to fd 2...
I1220 01:27:20.220265   17612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:20.220695   17612 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:27:20.221829   17612 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:20.222025   17612 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:20.225434   17612 ssh_runner.go:195] Run: systemctl --version
I1220 01:27:20.228271   17612 main.go:144] libmachine: domain functional-281340 has defined MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:20.228858   17612 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:96:cc", ip: ""} in network mk-functional-281340: {Iface:virbr1 ExpiryTime:2025-12-20 02:23:26 +0000 UTC Type:0 Mac:52:54:00:25:96:cc Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-281340 Clientid:01:52:54:00:25:96:cc}
I1220 01:27:20.228904   17612 main.go:144] libmachine: domain functional-281340 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:20.229153   17612 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-281340/id_rsa Username:docker}
I1220 01:27:20.329360   17612 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
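
As a side note, the JSON printed by "image ls --format json" above is a flat array of image records. The following is only a sketch of decoding it in Go: the listedImage type and its field set are assumptions read off the Stdout shown above, not types exported by minikube.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the Stdout above (id, repoDigests,
// repoTags, size); the type name is ours, not minikube's.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation the test makes, against the same profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-281340",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}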

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281340 image ls --format yaml --alsologtostderr:
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "88000000"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "52800000"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "74900000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "71900000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-281340
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: f2b433336100763c2c5a927e4197451113e42a52eaa88e97cad58e6c3ed9cea5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-281340
size: "30"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281340 image ls --format yaml --alsologtostderr:
I1220 01:27:17.408002   17516 out.go:360] Setting OutFile to fd 1 ...
I1220 01:27:17.408303   17516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.408313   17516 out.go:374] Setting ErrFile to fd 2...
I1220 01:27:17.408317   17516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.408508   17516 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:27:17.409316   17516 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.409472   17516 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.412433   17516 ssh_runner.go:195] Run: systemctl --version
I1220 01:27:17.415381   17516 main.go:144] libmachine: domain functional-281340 has defined MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.415849   17516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:96:cc", ip: ""} in network mk-functional-281340: {Iface:virbr1 ExpiryTime:2025-12-20 02:23:26 +0000 UTC Type:0 Mac:52:54:00:25:96:cc Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-281340 Clientid:01:52:54:00:25:96:cc}
I1220 01:27:17.415875   17516 main.go:144] libmachine: domain functional-281340 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.416100   17516 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-281340/id_rsa Username:docker}
I1220 01:27:17.513727   17516 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh pgrep buildkitd: exit status 1 (186.464919ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image build -t localhost/my-image:functional-281340 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-281340 image build -t localhost/my-image:functional-281340 testdata/build --alsologtostderr: (2.225379561s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281340 image build -t localhost/my-image:functional-281340 testdata/build --alsologtostderr:
I1220 01:27:17.804451   17538 out.go:360] Setting OutFile to fd 1 ...
I1220 01:27:17.804762   17538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.804773   17538 out.go:374] Setting ErrFile to fd 2...
I1220 01:27:17.804778   17538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:27:17.804973   17538 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:27:17.805679   17538 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.806365   17538 config.go:182] Loaded profile config "functional-281340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1220 01:27:17.808785   17538 ssh_runner.go:195] Run: systemctl --version
I1220 01:27:17.811576   17538 main.go:144] libmachine: domain functional-281340 has defined MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.812051   17538 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:96:cc", ip: ""} in network mk-functional-281340: {Iface:virbr1 ExpiryTime:2025-12-20 02:23:26 +0000 UTC Type:0 Mac:52:54:00:25:96:cc Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-281340 Clientid:01:52:54:00:25:96:cc}
I1220 01:27:17.812082   17538 main.go:144] libmachine: domain functional-281340 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:96:cc in network mk-functional-281340
I1220 01:27:17.812309   17538 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-281340/id_rsa Username:docker}
I1220 01:27:17.903572   17538 build_images.go:162] Building image from path: /tmp/build.1323001149.tar
I1220 01:27:17.903657   17538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1220 01:27:17.926957   17538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1323001149.tar
I1220 01:27:17.933102   17538 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1323001149.tar: stat -c "%s %y" /var/lib/minikube/build/build.1323001149.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1323001149.tar': No such file or directory
I1220 01:27:17.933139   17538 ssh_runner.go:362] scp /tmp/build.1323001149.tar --> /var/lib/minikube/build/build.1323001149.tar (3072 bytes)
I1220 01:27:17.984786   17538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1323001149
I1220 01:27:18.017652   17538 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1323001149 -xf /var/lib/minikube/build/build.1323001149.tar
I1220 01:27:18.045101   17538 docker.go:361] Building image: /var/lib/minikube/build/build.1323001149
I1220 01:27:18.045179   17538 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-281340 /var/lib/minikube/build/build.1323001149
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:62f069d036f1b7aa16eaf0564800d37ff12ec9c368f001a9e71f2f7126d42056
#8 writing image sha256:62f069d036f1b7aa16eaf0564800d37ff12ec9c368f001a9e71f2f7126d42056 done
#8 naming to localhost/my-image:functional-281340 0.0s done
#8 DONE 0.1s
I1220 01:27:19.932824   17538 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-281340 /var/lib/minikube/build/build.1323001149: (1.887623683s)
I1220 01:27:19.932888   17538 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1323001149
I1220 01:27:19.949540   17538 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1323001149.tar
I1220 01:27:19.962275   17538 build_images.go:218] Built localhost/my-image:functional-281340 from /tmp/build.1323001149.tar
I1220 01:27:19.962392   17538 build_images.go:134] succeeded building to: functional-281340
I1220 01:27:19.962410   17538 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)
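
For completeness, the same build-and-verify sequence can be driven outside the test harness. This is only a sketch that reuses the binary path, profile name, tag and context directory from the log above; it is not the implementation in functional_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-281340"
	tag := "localhost/my-image:" + profile

	// Build from the same context directory the test uses (testdata/build holds
	// the three-step Dockerfile whose buildkit output is shown above).
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
	}

	// The freshly built tag should now show up in the image list, which is what
	// the test's final "image ls" step is there to confirm.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), tag) {
		panic("built image not found in image ls output")
	}
	fmt.Println("built and listed:", tag)
}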

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-281340
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "305.558519ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.915096ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdany-port4022371456/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766193994407337232" to /tmp/TestFunctionalparallelMountCmdany-port4022371456/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766193994407337232" to /tmp/TestFunctionalparallelMountCmdany-port4022371456/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766193994407337232" to /tmp/TestFunctionalparallelMountCmdany-port4022371456/001/test-1766193994407337232
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.796581ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:26:34.591490   13018 retry.go:31] will retry after 253.495584ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 20 01:26 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 20 01:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 20 01:26 test-1766193994407337232
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh cat /mount-9p/test-1766193994407337232
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-281340 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [60fcabfb-7702-4637-adb4-1669f675c73f] Pending
helpers_test.go:353: "busybox-mount" [60fcabfb-7702-4637-adb4-1669f675c73f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [60fcabfb-7702-4637-adb4-1669f675c73f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [60fcabfb-7702-4637-adb4-1669f675c73f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007385476s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-281340 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdany-port4022371456/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.99s)
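
A rough sketch of the mount-then-verify loop in this test, assuming the same binary and profile; the host directory, retry interval and attempt count here are placeholders, not the values the test's retry helper uses.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-281340"
	hostDir := "/tmp/mount-demo" // stand-in for the test's temp directory
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}

	// Start the 9p mount in the background, like the test's daemon step.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		hostDir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll findmnt inside the guest until the 9p mount appears, mirroring the
	// retry the test performs after the first non-zero exit above.
	for attempt := 0; attempt < 20; attempt++ {
		check := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := check.CombinedOutput(); err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("/mount-9p never appeared in the guest")
}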

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image load --daemon kicbase/echo-server:functional-281340 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image load --daemon kicbase/echo-server:functional-281340 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-281340
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image load --daemon kicbase/echo-server:functional-281340 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image save kicbase/echo-server:functional-281340 /home/minitest/minikube/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image rm kicbase/echo-server:functional-281340 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image load /home/minitest/minikube/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
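
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile form a save/remove/load round trip. A compact sketch of that flow follows; the run helper is hypothetical, and the tarball path is the one the test logged above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run is a hypothetical helper around the minikube binary used in this report.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	profile := "functional-281340"
	img := "kicbase/echo-server:" + profile
	tar := "/home/minitest/minikube/echo-server-save.tar" // path used by the test above

	run("-p", profile, "image", "save", img, tar) // export the image to a tarball
	run("-p", profile, "image", "rm", img)        // remove it from the node's runtime
	run("-p", profile, "image", "load", tar)      // load it back from the file

	if !strings.Contains(run("-p", profile, "image", "ls"), "echo-server") {
		panic("echo-server image missing after reload")
	}
	fmt.Println("save/remove/load round trip OK")
}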

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-281340
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 image save --daemon kicbase/echo-server:functional-281340 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-281340
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdspecific-port2098226976/001:/mount-9p --alsologtostderr -v=1 --port 37155]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.393846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:26:40.588752   13018 retry.go:31] will retry after 565.987239ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdspecific-port2098226976/001:/mount-9p --alsologtostderr -v=1 --port 37155] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh "sudo umount -f /mount-9p": exit status 1 (185.283805ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-281340 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdspecific-port2098226976/001:/mount-9p --alsologtostderr -v=1 --port 37155] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T" /mount1: exit status 1 (213.296165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:26:42.177772   13018 retry.go:31] will retry after 559.673653ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T" /mount2
I1220 01:26:43.096221   13018 detect.go:223] nested VM detected
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-281340 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4141906898/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (26.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-281340 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-281340 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-d6w8b" [b91b6112-b067-4e20-a0e9-bd0769968352] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-d6w8b" [b91b6112-b067-4e20-a0e9-bd0769968352] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.039565532s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.39s)
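
The DeployApp steps translate directly to two kubectl commands plus a wait. The sketch below substitutes "kubectl wait" for the test's own polling helper; the 10m timeout matches the one logged above, everything else is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "functional-281340"

	// Same two kubectl steps the test runs, then wait for the pod to be Ready.
	steps := [][]string{
		{"kubectl", "--context", ctx, "create", "deployment", "hello-node", "--image", "kicbase/echo-server"},
		{"kubectl", "--context", ctx, "expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"kubectl", "--context", ctx, "wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %v\n%s", s, err, out))
		}
	}
	fmt.Println("hello-node deployed, exposed and ready")
}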

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-281340 service list: (1.356376017s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service list -o json
I1220 01:27:18.280281   13018 detect.go:223] nested VM detected
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-281340 service list -o json: (1.273506318s)
functional_test.go:1504: Took "1.273610922s" to run "out/minikube-linux-amd64 -p functional-281340 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.31:31499
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-281340 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.31:31499
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
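
Once the service URL is known, the endpoint can be exercised directly. This sketch asks minikube for the URL (which resolved to http://192.168.39.31:31499 in this run) and issues one GET against it; it assumes the command prints a single URL.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL, exactly as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-281340",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	// Assumption: a single URL on the first line of output.
	url := strings.Fields(string(out))[0]

	// One request against the echo-server to confirm the NodePort answers.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}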

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-281340
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-281340
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-281340
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/files/etc/test/nested/copy/13018/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (53.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1220 01:27:41.400339   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-562123 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: (53.04724675s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (53.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (54.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1220 01:28:20.519436   13018 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-562123 --alsologtostderr -v=8: (54.926812406s)
functional_test.go:678: soft start took 54.927221198s for "functional-562123" cluster.
I1220 01:29:15.446628   13018 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (54.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-562123 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC1842372804/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache add minikube-local-cache-test:functional-562123
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache delete minikube-local-cache-test:functional-562123
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.789698ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.19s)
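
The cache_reload sequence above is: delete the image inside the node, confirm crictl no longer sees it, run "cache reload", then confirm it is back. A sketch of that flow, with a hypothetical mk wrapper around the same binary and profile:

package main

import (
	"fmt"
	"os/exec"
)

// mk is a hypothetical wrapper around the minikube binary used in this report.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	profile := "functional-562123"
	img := "registry.k8s.io/pause:latest"

	// Remove the image from the node's container runtime, as the test does.
	if out, err := mk("-p", profile, "ssh", "sudo docker rmi "+img); err != nil {
		panic(fmt.Sprintf("docker rmi: %v\n%s", err, out))
	}
	// crictl inspecti should now fail, mirroring the non-zero exit logged above.
	if _, err := mk("-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		panic("image unexpectedly still present before reload")
	}
	// cache reload pushes the locally cached images back into the node.
	if out, err := mk("-p", profile, "cache", "reload"); err != nil {
		panic(fmt.Sprintf("cache reload: %v\n%s", err, out))
	}
	// After the reload, inspecti succeeds again.
	if out, err := mk("-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		panic(fmt.Sprintf("inspecti after reload: %v\n%s", err, out))
	}
	fmt.Println("cache reload restored", img)
}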

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 kubectl -- --context functional-562123 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-562123 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (52.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1220 01:29:57.553840   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-562123 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.270048041s)
functional_test.go:776: restart took 52.270141535s for "functional-562123" cluster.
I1220 01:30:12.774167   13018 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (52.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-562123 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1097315175/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-562123 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-562123
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-562123: exit status 115 (221.689098ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.29:31551 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_1a9e407b3012cd2729ac720152316fb3398a8e6b_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-562123 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 config get cpus: exit status 14 (68.34973ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 config get cpus: exit status 14 (71.840826ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (33.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-562123 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-562123 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 19569: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (33.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-562123 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (108.204654ms)

-- stdout --
	* [functional-562123] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1220 01:30:28.276760   19360 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:30:28.276989   19360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:30:28.276998   19360 out.go:374] Setting ErrFile to fd 2...
	I1220 01:30:28.277002   19360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:30:28.277174   19360 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:30:28.277617   19360 out.go:368] Setting JSON to false
	I1220 01:30:28.278436   19360 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":941,"bootTime":1766193287,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:30:28.278528   19360 start.go:143] virtualization: kvm guest
	I1220 01:30:28.280219   19360 out.go:179] * [functional-562123] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	I1220 01:30:28.281304   19360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:30:28.281308   19360 notify.go:221] Checking for updates...
	I1220 01:30:28.282529   19360 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:30:28.283624   19360 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:30:28.284777   19360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 01:30:28.285857   19360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 01:30:28.287548   19360 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1220 01:30:28.288237   19360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 01:30:28.321115   19360 out.go:179] * Using the kvm2 driver based on existing profile
	I1220 01:30:28.322084   19360 start.go:309] selected driver: kvm2
	I1220 01:30:28.322104   19360 start.go:928] validating driver "kvm2" against &{Name:functional-562123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-562123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 01:30:28.322254   19360 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 01:30:28.324427   19360 out.go:203] 
	W1220 01:30:28.325492   19360 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1220 01:30:28.326457   19360 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-562123 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-562123 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (105.701473ms)

-- stdout --
	* [functional-562123] minikube v1.37.0 sur Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1220 01:30:28.489736   19391 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:30:28.489990   19391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:30:28.490001   19391 out.go:374] Setting ErrFile to fd 2...
	I1220 01:30:28.490006   19391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:30:28.490366   19391 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:30:28.490882   19391 out.go:368] Setting JSON to false
	I1220 01:30:28.491924   19391 start.go:133] hostinfo: {"hostname":"minitest-vm-9d09530a.c.k8s-infra-e2e-boskos-103.internal","uptime":942,"bootTime":1766193287,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"24.04","kernelVersion":"6.14.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"324b1d65-3a78-4886-9ab4-95ed3c96a31c"}
	I1220 01:30:28.492021   19391 start.go:143] virtualization: kvm guest
	I1220 01:30:28.493650   19391 out.go:179] * [functional-562123] minikube v1.37.0 sur Ubuntu 24.04 (kvm/amd64)
	I1220 01:30:28.494827   19391 notify.go:221] Checking for updates...
	I1220 01:30:28.494851   19391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1220 01:30:28.496118   19391 out.go:179]   - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	I1220 01:30:28.497910   19391 out.go:179]   - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	I1220 01:30:28.499065   19391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1220 01:30:28.500173   19391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1220 01:30:28.501685   19391 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1220 01:30:28.502196   19391 driver.go:422] Setting default libvirt URI to qemu:///system
	I1220 01:30:28.534469   19391 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1220 01:30:28.535493   19391 start.go:309] selected driver: kvm2
	I1220 01:30:28.535507   19391 start.go:928] validating driver "kvm2" against &{Name:functional-562123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-562123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1220 01:30:28.535619   19391 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1220 01:30:28.537477   19391 out.go:203] 
	W1220 01:30:28.538572   19391 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1220 01:30:28.539588   19391 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (8.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-562123 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-562123 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-wt2w6" [e50258e8-46d3-46ac-9773-5fd6417a3f15] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-wt2w6" [e50258e8-46d3-46ac-9773-5fd6417a3f15] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005083389s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.29:32367
functional_test.go:1680: http://192.168.39.29:32367: success! body:
Request served by hello-node-connect-9f67c86d4-wt2w6

HTTP/1.1 GET /

Host: 192.168.39.29:32367
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (8.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (49.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [fb91e954-d7f2-4129-b8cb-1f2c70b6efe4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003646991s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-562123 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-562123 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-562123 get pvc myclaim -o=json
I1220 01:30:24.598939   13018 retry.go:31] will retry after 1.16381806s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3abb04c4-208d-4055-828b-b2a8b5dbd15f ResourceVersion:750 Generation:0 CreationTimestamp:2025-12-20 01:30:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000b51090 VolumeMode:0xc000b510a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-562123 get pvc myclaim -o=json
I1220 01:30:25.843716   13018 retry.go:31] will retry after 3.090144328s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3abb04c4-208d-4055-828b-b2a8b5dbd15f ResourceVersion:750 Generation:0 CreationTimestamp:2025-12-20 01:30:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c38500 VolumeMode:0xc001c38510 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-562123 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562123 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [41f2b766-123b-4826-89cd-7d4e35a90095] Pending
helpers_test.go:353: "sp-pod" [41f2b766-123b-4826-89cd-7d4e35a90095] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [41f2b766-123b-4826-89cd-7d4e35a90095] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.011796871s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-562123 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-562123 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-562123 delete -f testdata/storage-provisioner/pod.yaml: (1.542166124s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562123 apply -f testdata/storage-provisioner/pod.yaml
I1220 01:31:00.236836   13018 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e51758a0-79b8-4bc2-8ef1-a957c34ae13f] Pending
helpers_test.go:353: "sp-pod" [e51758a0-79b8-4bc2-8ef1-a957c34ae13f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e51758a0-79b8-4bc2-8ef1-a957c34ae13f] Running
2025/12/20 01:31:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004118189s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-562123 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (49.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh -n functional-562123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cp functional-562123:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2846201119/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh -n functional-562123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh -n functional-562123 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (42.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-562123 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-xrv64" [9c40fe33-a939-44c0-9c56-81ae6ff36ec5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-xrv64" [9c40fe33-a939-44c0-9c56-81ae6ff36ec5] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 28.007129854s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;": exit status 1 (276.470676ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1220 01:30:56.995105   13018 retry.go:31] will retry after 641.875468ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;": exit status 1 (206.984027ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1220 01:30:57.844761   13018 retry.go:31] will retry after 1.465271547s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;": exit status 1 (266.646318ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1220 01:30:59.577301   13018 retry.go:31] will retry after 2.397624032s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;": exit status 1 (223.633458ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1220 01:31:02.199329   13018 retry.go:31] will retry after 3.2223784s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;": exit status 1 (158.451769ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1220 01:31:05.580773   13018 retry.go:31] will retry after 4.838962074s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-562123 exec mysql-7d7b65bc95-xrv64 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (42.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13018/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /etc/test/nested/copy/13018/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13018.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /etc/ssl/certs/13018.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13018.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /usr/share/ca-certificates/13018.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/130182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /etc/ssl/certs/130182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/130182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /usr/share/ca-certificates/130182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-562123 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh "sudo systemctl is-active crio": exit status 1 (183.324988ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-562123 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-562123 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-644mc" [1d083c12-c8da-4442-a6fa-1a9b49cf04f4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-644mc" [1d083c12-c8da-4442-a6fa-1a9b49cf04f4] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.063966041s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-562123 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-562123
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-562123
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-562123 image ls --format short --alsologtostderr:
I1220 01:30:34.793449   19610 out.go:360] Setting OutFile to fd 1 ...
I1220 01:30:34.793558   19610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:34.793570   19610 out.go:374] Setting ErrFile to fd 2...
I1220 01:30:34.793575   19610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:34.793769   19610 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:30:34.794360   19610 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:34.794487   19610 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:34.797668   19610 ssh_runner.go:195] Run: systemctl --version
I1220 01:30:34.800143   19610 main.go:144] libmachine: domain functional-562123 has defined MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:34.800572   19610 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:a7:38", ip: ""} in network mk-functional-562123: {Iface:virbr1 ExpiryTime:2025-12-20 02:27:41 +0000 UTC Type:0 Mac:52:54:00:48:a7:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-562123 Clientid:01:52:54:00:48:a7:38}
I1220 01:30:34.800602   19610 main.go:144] libmachine: domain functional-562123 has defined IP address 192.168.39.29 and MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:34.800795   19610 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-562123/id_rsa Username:docker}
I1220 01:30:34.881879   19610 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-562123 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1      │ 58865405a13bc │ 89.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1      │ af0321f3a4f38 │ 70.7MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-562123 │ 985d3e9c4f1d5 │ 1.24MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1      │ 73f80cdc073da │ 51.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1      │ 5032a56602e1b │ 75.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test │ functional-562123 │ f2b4333361007 │ 30B    │
│ registry.k8s.io/etcd                        │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ docker.io/kicbase/echo-server               │ functional-562123 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-562123 image ls --format table --alsologtostderr:
I1220 01:30:38.086271   19697 out.go:360] Setting OutFile to fd 1 ...
I1220 01:30:38.086557   19697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:38.086568   19697 out.go:374] Setting ErrFile to fd 2...
I1220 01:30:38.086574   19697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:38.086884   19697 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:30:38.087775   19697 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:38.087930   19697 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:38.090485   19697 ssh_runner.go:195] Run: systemctl --version
I1220 01:30:38.093017   19697 main.go:144] libmachine: domain functional-562123 has defined MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:38.093453   19697 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:a7:38", ip: ""} in network mk-functional-562123: {Iface:virbr1 ExpiryTime:2025-12-20 02:27:41 +0000 UTC Type:0 Mac:52:54:00:48:a7:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-562123 Clientid:01:52:54:00:48:a7:38}
I1220 01:30:38.093493   19697 main.go:144] libmachine: domain functional-562123 has defined IP address 192.168.39.29 and MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:38.093622   19697 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-562123/id_rsa Username:docker}
I1220 01:30:38.183249   19697 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-562123 image ls --format json --alsologtostderr:
[{"id":"f2b433336100763c2c5a927e4197451113e42a52eaa88e97cad58e6c3ed9cea5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-562123"],"size":"30"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"51700000"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"89800000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-562123","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provi
sioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"985d3e9c4f1d573f110dfccb11ad08145462022b6f15fe269dcb269723166c98","repoDigests":[],"repoTags":["localhost/my-image:functional-562123"],"size":"1240000"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"75800000"},{"id":"af0321f3a4f388cfb978464739c323ebf89
1a7b0b50cdfd7179e92f141dad42a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"70700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-562123 image ls --format json --alsologtostderr:
I1220 01:30:37.882866   19686 out.go:360] Setting OutFile to fd 1 ...
I1220 01:30:37.883052   19686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:37.883060   19686 out.go:374] Setting ErrFile to fd 2...
I1220 01:30:37.883063   19686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:37.883254   19686 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:30:37.883843   19686 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:37.883935   19686 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:37.886080   19686 ssh_runner.go:195] Run: systemctl --version
I1220 01:30:37.888565   19686 main.go:144] libmachine: domain functional-562123 has defined MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:37.889030   19686 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:a7:38", ip: ""} in network mk-functional-562123: {Iface:virbr1 ExpiryTime:2025-12-20 02:27:41 +0000 UTC Type:0 Mac:52:54:00:48:a7:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-562123 Clientid:01:52:54:00:48:a7:38}
I1220 01:30:37.889068   19686 main.go:144] libmachine: domain functional-562123 has defined IP address 192.168.39.29 and MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:37.889298   19686 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-562123/id_rsa Username:docker}
I1220 01:30:37.980061   19686 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-562123 image ls --format yaml --alsologtostderr:
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "51700000"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "75800000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "89800000"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "70700000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-562123
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f2b433336100763c2c5a927e4197451113e42a52eaa88e97cad58e6c3ed9cea5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-562123
size: "30"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-562123 image ls --format yaml --alsologtostderr:
I1220 01:30:34.981935   19621 out.go:360] Setting OutFile to fd 1 ...
I1220 01:30:34.982161   19621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:34.982168   19621 out.go:374] Setting ErrFile to fd 2...
I1220 01:30:34.982173   19621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:34.982380   19621 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:30:34.982992   19621 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:34.983087   19621 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:34.985360   19621 ssh_runner.go:195] Run: systemctl --version
I1220 01:30:34.987735   19621 main.go:144] libmachine: domain functional-562123 has defined MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:34.988128   19621 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:a7:38", ip: ""} in network mk-functional-562123: {Iface:virbr1 ExpiryTime:2025-12-20 02:27:41 +0000 UTC Type:0 Mac:52:54:00:48:a7:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-562123 Clientid:01:52:54:00:48:a7:38}
I1220 01:30:34.988149   19621 main.go:144] libmachine: domain functional-562123 has defined IP address 192.168.39.29 and MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:34.988300   19621 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-562123/id_rsa Username:docker}
I1220 01:30:35.078607   19621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)
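The four ImageList subtests above run the same command with different encodings of the same image data; a minimal sketch of the variants, assuming the same profile:
  out/minikube-linux-amd64 -p functional-562123 image ls --format short   # one repo:tag per line
  out/minikube-linux-amd64 -p functional-562123 image ls --format table   # boxed table with IMAGE ID and SIZE
  out/minikube-linux-amd64 -p functional-562123 image ls --format json    # id/repoDigests/repoTags/size objects
  out/minikube-linux-amd64 -p functional-562123 image ls --format yaml    # same fields, YAML-encoded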

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh pgrep buildkitd: exit status 1 (162.802596ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image build -t localhost/my-image:functional-562123 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-562123 image build -t localhost/my-image:functional-562123 testdata/build --alsologtostderr: (2.357136706s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-562123 image build -t localhost/my-image:functional-562123 testdata/build --alsologtostderr:
I1220 01:30:35.345727   19643 out.go:360] Setting OutFile to fd 1 ...
I1220 01:30:35.346009   19643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:35.346021   19643 out.go:374] Setting ErrFile to fd 2...
I1220 01:30:35.346025   19643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1220 01:30:35.346223   19643 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
I1220 01:30:35.346845   19643 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:35.347429   19643 config.go:182] Loaded profile config "functional-562123": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1220 01:30:35.349921   19643 ssh_runner.go:195] Run: systemctl --version
I1220 01:30:35.352658   19643 main.go:144] libmachine: domain functional-562123 has defined MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:35.353136   19643 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:a7:38", ip: ""} in network mk-functional-562123: {Iface:virbr1 ExpiryTime:2025-12-20 02:27:41 +0000 UTC Type:0 Mac:52:54:00:48:a7:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:functional-562123 Clientid:01:52:54:00:48:a7:38}
I1220 01:30:35.353160   19643 main.go:144] libmachine: domain functional-562123 has defined IP address 192.168.39.29 and MAC address 52:54:00:48:a7:38 in network mk-functional-562123
I1220 01:30:35.353356   19643 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/functional-562123/id_rsa Username:docker}
I1220 01:30:35.446111   19643 build_images.go:162] Building image from path: /tmp/build.1341256925.tar
I1220 01:30:35.446189   19643 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1220 01:30:35.465004   19643 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1341256925.tar
I1220 01:30:35.470273   19643 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1341256925.tar: stat -c "%s %y" /var/lib/minikube/build/build.1341256925.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1341256925.tar': No such file or directory
I1220 01:30:35.470298   19643 ssh_runner.go:362] scp /tmp/build.1341256925.tar --> /var/lib/minikube/build/build.1341256925.tar (3072 bytes)
I1220 01:30:35.514533   19643 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1341256925
I1220 01:30:35.528836   19643 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1341256925 -xf /var/lib/minikube/build/build.1341256925.tar
I1220 01:30:35.544525   19643 docker.go:361] Building image: /var/lib/minikube/build/build.1341256925
I1220 01:30:35.544591   19643 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-562123 /var/lib/minikube/build/build.1341256925
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:985d3e9c4f1d573f110dfccb11ad08145462022b6f15fe269dcb269723166c98 done
#8 naming to localhost/my-image:functional-562123 done
#8 DONE 0.1s
I1220 01:30:37.585135   19643 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-562123 /var/lib/minikube/build/build.1341256925: (2.040499517s)
I1220 01:30:37.585250   19643 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1341256925
I1220 01:30:37.603097   19643 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1341256925.tar
I1220 01:30:37.620813   19643 build_images.go:218] Built localhost/my-image:functional-562123 from /tmp/build.1341256925.tar
I1220 01:30:37.620850   19643 build_images.go:134] succeeded building to: functional-562123
I1220 01:30:37.620857   19643 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.71s)
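Judging from the BuildKit steps in the log (a 97-byte Dockerfile, RUN true, ADD content.txt), the testdata/build context can be approximated as below; this is a reconstruction from the log, not the checked-in test data, and the content.txt payload is a placeholder:
  # recreate an equivalent build context and build it inside the cluster runtime
  mkdir -p /tmp/build-demo && cd /tmp/build-demo
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  echo demo > content.txt
  out/minikube-linux-amd64 -p functional-562123 image build -t localhost/my-image:functional-562123 /tmp/build-demo --alsologtostderr
  out/minikube-linux-amd64 -p functional-562123 image ls   # localhost/my-image:functional-562123 should now be listed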

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image load --daemon kicbase/echo-server:functional-562123 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (0.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image load --daemon kicbase/echo-server:functional-562123 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.66s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-562123
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image load --daemon kicbase/echo-server:functional-562123 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)
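The Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon subtests above exercise one workflow: tag an image on the host Docker daemon, then copy it into the cluster's container runtime. A minimal sketch, assuming the same profile:
  # tag on the host daemon, push the tag into the cluster, confirm it arrived
  docker pull kicbase/echo-server:latest
  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-562123
  out/minikube-linux-amd64 -p functional-562123 image load --daemon kicbase/echo-server:functional-562123
  out/minikube-linux-amd64 -p functional-562123 image ls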

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "279.369928ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.584745ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "247.265901ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.595804ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.31s)
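The three ProfileCmd subtests above compare the full and light listing modes; the light variants presumably skip per-cluster status probes, which would explain the ~60ms timings versus ~250-280ms for the full listings. A minimal sketch of the variants:
  out/minikube-linux-amd64 profile list                   # full table
  out/minikube-linux-amd64 profile list -l                # light listing
  out/minikube-linux-amd64 profile list -o json           # full listing as JSON
  out/minikube-linux-amd64 profile list -o json --light   # light listing as JSON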

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image save kicbase/echo-server:functional-562123 /home/minitest/minikube/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (6.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2965669198/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766194222100792743" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2965669198/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766194222100792743" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2965669198/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766194222100792743" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2965669198/001/test-1766194222100792743
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (161.955668ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:30:22.263066   13018 retry.go:31] will retry after 545.952658ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 20 01:30 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 20 01:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 20 01:30 test-1766194222100792743
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh cat /mount-9p/test-1766194222100792743
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-562123 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [1da3c721-a8b8-478b-8290-5cbcf17f4807] Pending
helpers_test.go:353: "busybox-mount" [1da3c721-a8b8-478b-8290-5cbcf17f4807] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [1da3c721-a8b8-478b-8290-5cbcf17f4807] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [1da3c721-a8b8-478b-8290-5cbcf17f4807] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004961159s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-562123 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2965669198/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (6.03s)
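The any-port mount test above mounts a host directory into the guest over 9p and then inspects it via ssh; a minimal sketch for doing the same by hand, with /tmp/any-host-dir standing in for the per-test temp directory:
  # start the mount helper in the background, then verify it from inside the VM
  out/minikube-linux-amd64 mount -p functional-562123 /tmp/any-host-dir:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-562123 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-562123 ssh "sudo umount -f /mount-9p"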

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image rm kicbase/echo-server:functional-562123 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image load /home/minitest/minikube/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-562123
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 image save --daemon kicbase/echo-server:functional-562123 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.73s)
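ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a round-trip: save the in-cluster image to a tarball, delete it, restore it from the tarball, and finally copy it back out to the host daemon. A minimal sketch of that round-trip, assuming the same paths:
  out/minikube-linux-amd64 -p functional-562123 image save kicbase/echo-server:functional-562123 /home/minitest/minikube/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-562123 image rm kicbase/echo-server:functional-562123 --alsologtostderr
  out/minikube-linux-amd64 -p functional-562123 image load /home/minitest/minikube/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-562123 image save --daemon kicbase/echo-server:functional-562123 --alsologtostderr
  docker image inspect kicbase/echo-server:functional-562123   # the image is now back on the host daemon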

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-562123 docker-env) && out/minikube-linux-amd64 status -p functional-562123"
E1220 01:30:25.241418   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-562123 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.70s)
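The DockerEnv test points the host docker CLI at the daemon inside the minikube VM; a minimal sketch of the same check:
  # export the docker-env variables for the profile, then talk to the in-VM daemon
  eval $(out/minikube-linux-amd64 -p functional-562123 docker-env)
  out/minikube-linux-amd64 status -p functional-562123
  docker images   # lists the cluster runtime's images, not the host's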

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)
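The three UpdateContextCmd subtests all run the same command, which refreshes the kubeconfig entry for the profile (for example after an IP change); a minimal sketch, where the kubectl call is a standard command added here for illustration:
  out/minikube-linux-amd64 -p functional-562123 update-context --alsologtostderr -v=2
  kubectl config current-context   # should report the functional-562123 context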

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service list -o json
functional_test.go:1504: Took "247.152298ms" to run "out/minikube-linux-amd64 -p functional-562123 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.29:32344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.29:32344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)
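The ServiceCmd subtests above resolve the hello-node NodePort in several ways; a minimal sketch of the variants and the endpoints they reported in this run:
  out/minikube-linux-amd64 -p functional-562123 service list
  out/minikube-linux-amd64 -p functional-562123 service list -o json
  out/minikube-linux-amd64 -p functional-562123 service --namespace=default --https --url hello-node   # https://192.168.39.29:32344
  out/minikube-linux-amd64 -p functional-562123 service hello-node --url --format={{.IP}}              # node IP only
  out/minikube-linux-amd64 -p functional-562123 service hello-node --url                               # http://192.168.39.29:32344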

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun304771557/001:/mount-9p --alsologtostderr -v=1 --port 39679]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (164.307339ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:30:28.299379   13018 retry.go:31] will retry after 410.024253ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun304771557/001:/mount-9p --alsologtostderr -v=1 --port 39679] ...
I1220 01:30:29.127439   13018 detect.go:223] nested VM detected
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh "sudo umount -f /mount-9p": exit status 1 (192.133254ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-562123 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun304771557/001:/mount-9p --alsologtostderr -v=1 --port 39679] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (0.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount1: exit status 1 (186.922626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1220 01:30:29.645304   13018 retry.go:31] will retry after 288.485642ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-562123 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-562123 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun441405451/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (0.98s)
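VerifyCleanup above mounts the same host directory at three guest paths and then tears everything down with a single --kill=true call; a minimal sketch, with /tmp/shared-dir standing in for the per-test temp directory:
  out/minikube-linux-amd64 mount -p functional-562123 /tmp/shared-dir:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-562123 /tmp/shared-dir:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-562123 /tmp/shared-dir:/mount3 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-562123 ssh "findmnt -T" /mount1
  out/minikube-linux-amd64 mount -p functional-562123 --kill=true   # stops every mount helper spawned for the profile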

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-562123
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (242.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=docker
E1220 01:31:34.991533   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:34.996861   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.007169   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.027537   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.067906   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.148301   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.308777   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:35.629467   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:36.270035   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:37.550552   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:40.111313   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:45.231679   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:31:55.472863   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:32:15.953402   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:32:56.914630   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:34:18.835137   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:34:57.552548   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=docker: (4m1.726536973s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (242.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 kubectl -- rollout status deployment/busybox: (2.105259791s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-cjl9k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-gprqp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-tjxk9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-cjl9k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-gprqp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-tjxk9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-cjl9k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-gprqp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-tjxk9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.51s)
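The DeployApp step above applies a busybox Deployment and then resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from inside every pod, which is what proves cluster DNS works behind the HA control plane. Below is a minimal Go sketch of the same check, shelling out to kubectl the way the harness does; the ha-003647 context name and the jsonpath query come from the log, everything else (error handling, looping over all pods) is an assumption.

// dnscheck.go - sketch of the DeployApp DNS verification; assumes kubectl is
// on PATH and that the ha-003647 context from the log exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-003647" // profile/context name taken from the test log

	// Same jsonpath query the test uses to list the busybox pod names.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}

	for _, pod := range strings.Fields(string(out)) {
		// Resolve the fully qualified service name from inside the pod.
		err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local").Run()
		if err != nil {
			fmt.Printf("DNS lookup failed in %s: %v\n", pod, err)
			continue
		}
		fmt.Printf("DNS OK in %s\n", pod)
	}
}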

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-cjl9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-cjl9k -- sh -c "ping -c 1 192.168.39.1"
E1220 01:35:18.900861   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:18.906178   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:18.916512   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:18.937702   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-gprqp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1220 01:35:18.978184   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:19.058541   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-gprqp -- sh -c "ping -c 1 192.168.39.1"
E1220 01:35:19.219543   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-tjxk9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1220 01:35:19.539765   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 kubectl -- exec busybox-7b57f96db7-tjxk9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.33s)
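The host IP that each pod pings is extracted with the nslookup | awk 'NR==5' | cut pipeline shown above (line 5 of busybox's nslookup output carries the resolved address, per the awk expression the test uses). A minimal sketch that runs the same pipeline from Go and then issues the single ping; the context and pod name are copied from the log, the rest is an assumption.

// pinghost.go - sketch of the PingHostFromPods check; assumes kubectl is on
// PATH and that the pod named in the log still exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx, pod := "ha-003647", "busybox-7b57f96db7-cjl9k" // names from the log

	// The exact pipeline the test runs: pick the address line out of the
	// busybox nslookup output for host.minikube.internal.
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"sh", "-c", pipeline).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))

	// One ICMP probe from inside the pod, as at ha_test.go:218.
	if err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host", hostIP, "reachable from", pod)
}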

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (49.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node add --alsologtostderr -v 5
E1220 01:35:20.180815   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:21.461728   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:24.022160   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:29.143130   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:39.383393   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:35:59.864017   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 node add --alsologtostderr -v 5: (48.361793927s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-003647 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (10.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp testdata/cp-test.txt ha-003647:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4228736321/001/cp-test_ha-003647.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647:/home/docker/cp-test.txt ha-003647-m02:/home/docker/cp-test_ha-003647_ha-003647-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test_ha-003647_ha-003647-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647:/home/docker/cp-test.txt ha-003647-m03:/home/docker/cp-test_ha-003647_ha-003647-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test_ha-003647_ha-003647-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647:/home/docker/cp-test.txt ha-003647-m04:/home/docker/cp-test_ha-003647_ha-003647-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test_ha-003647_ha-003647-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp testdata/cp-test.txt ha-003647-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4228736321/001/cp-test_ha-003647-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m02:/home/docker/cp-test.txt ha-003647:/home/docker/cp-test_ha-003647-m02_ha-003647.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test_ha-003647-m02_ha-003647.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m02:/home/docker/cp-test.txt ha-003647-m03:/home/docker/cp-test_ha-003647-m02_ha-003647-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test_ha-003647-m02_ha-003647-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m02:/home/docker/cp-test.txt ha-003647-m04:/home/docker/cp-test_ha-003647-m02_ha-003647-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test_ha-003647-m02_ha-003647-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp testdata/cp-test.txt ha-003647-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4228736321/001/cp-test_ha-003647-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m03:/home/docker/cp-test.txt ha-003647:/home/docker/cp-test_ha-003647-m03_ha-003647.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test_ha-003647-m03_ha-003647.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m03:/home/docker/cp-test.txt ha-003647-m02:/home/docker/cp-test_ha-003647-m03_ha-003647-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test_ha-003647-m03_ha-003647-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m03:/home/docker/cp-test.txt ha-003647-m04:/home/docker/cp-test_ha-003647-m03_ha-003647-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test_ha-003647-m03_ha-003647-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp testdata/cp-test.txt ha-003647-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4228736321/001/cp-test_ha-003647-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m04:/home/docker/cp-test.txt ha-003647:/home/docker/cp-test_ha-003647-m04_ha-003647.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647 "sudo cat /home/docker/cp-test_ha-003647-m04_ha-003647.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m04:/home/docker/cp-test.txt ha-003647-m02:/home/docker/cp-test_ha-003647-m04_ha-003647-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m02 "sudo cat /home/docker/cp-test_ha-003647-m04_ha-003647-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 cp ha-003647-m04:/home/docker/cp-test.txt ha-003647-m03:/home/docker/cp-test_ha-003647-m04_ha-003647-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 ssh -n ha-003647-m03 "sudo cat /home/docker/cp-test_ha-003647-m04_ha-003647-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.89s)
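Every cp above is immediately followed by a `minikube ssh -n <node> "sudo cat ..."`, so each copy is verified by reading the file back on the target node, for every node pair. A minimal sketch of that copy-then-read-back loop; the binary path, profile and node names come from the log, the loop itself is an assumption.

// copycheck.go - sketch of the CopyFile copy/read-back pattern used above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "ha-003647"
	nodes := []string{"ha-003647", "ha-003647-m02", "ha-003647-m03", "ha-003647-m04"}

	for _, n := range nodes {
		// Push the local test file to the node...
		dst := n + ":/home/docker/cp-test.txt"
		if err := exec.Command(minikube, "-p", profile, "cp",
			"testdata/cp-test.txt", dst).Run(); err != nil {
			panic(err)
		}
		// ...then read it back over SSH to confirm the contents arrived.
		out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", n,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s", n, out)
	}
}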

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (14.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 node stop m02 --alsologtostderr -v 5: (14.1744362s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
E1220 01:36:34.991388   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5: exit status 7 (498.753084ms)

                                                
                                                
-- stdout --
	ha-003647
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003647-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003647-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003647-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:36:34.636633   21649 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:36:34.636910   21649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:36:34.636920   21649 out.go:374] Setting ErrFile to fd 2...
	I1220 01:36:34.636925   21649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:36:34.637277   21649 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:36:34.637501   21649 out.go:368] Setting JSON to false
	I1220 01:36:34.637533   21649 mustload.go:66] Loading cluster: ha-003647
	I1220 01:36:34.637645   21649 notify.go:221] Checking for updates...
	I1220 01:36:34.637981   21649 config.go:182] Loaded profile config "ha-003647": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:36:34.637996   21649 status.go:174] checking status of ha-003647 ...
	I1220 01:36:34.640414   21649 status.go:371] ha-003647 host status = "Running" (err=<nil>)
	I1220 01:36:34.640434   21649 host.go:66] Checking if "ha-003647" exists ...
	I1220 01:36:34.643426   21649 main.go:144] libmachine: domain ha-003647 has defined MAC address 52:54:00:49:ee:14 in network mk-ha-003647
	I1220 01:36:34.643870   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:49:ee:14", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:31:25 +0000 UTC Type:0 Mac:52:54:00:49:ee:14 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-003647 Clientid:01:52:54:00:49:ee:14}
	I1220 01:36:34.643916   21649 main.go:144] libmachine: domain ha-003647 has defined IP address 192.168.39.247 and MAC address 52:54:00:49:ee:14 in network mk-ha-003647
	I1220 01:36:34.644067   21649 host.go:66] Checking if "ha-003647" exists ...
	I1220 01:36:34.644391   21649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1220 01:36:34.646534   21649 main.go:144] libmachine: domain ha-003647 has defined MAC address 52:54:00:49:ee:14 in network mk-ha-003647
	I1220 01:36:34.646950   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:49:ee:14", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:31:25 +0000 UTC Type:0 Mac:52:54:00:49:ee:14 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-003647 Clientid:01:52:54:00:49:ee:14}
	I1220 01:36:34.646981   21649 main.go:144] libmachine: domain ha-003647 has defined IP address 192.168.39.247 and MAC address 52:54:00:49:ee:14 in network mk-ha-003647
	I1220 01:36:34.647122   21649 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/ha-003647/id_rsa Username:docker}
	I1220 01:36:34.737998   21649 ssh_runner.go:195] Run: systemctl --version
	I1220 01:36:34.751507   21649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 01:36:34.772457   21649 kubeconfig.go:125] found "ha-003647" server: "https://192.168.39.254:8443"
	I1220 01:36:34.772495   21649 api_server.go:166] Checking apiserver status ...
	I1220 01:36:34.772548   21649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 01:36:34.792113   21649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2520/cgroup
	W1220 01:36:34.803286   21649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2520/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1220 01:36:34.803345   21649 ssh_runner.go:195] Run: ls
	I1220 01:36:34.808134   21649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1220 01:36:34.813985   21649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1220 01:36:34.814010   21649 status.go:463] ha-003647 apiserver status = Running (err=<nil>)
	I1220 01:36:34.814022   21649 status.go:176] ha-003647 status: &{Name:ha-003647 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:36:34.814041   21649 status.go:174] checking status of ha-003647-m02 ...
	I1220 01:36:34.815779   21649 status.go:371] ha-003647-m02 host status = "Stopped" (err=<nil>)
	I1220 01:36:34.815798   21649 status.go:384] host is not running, skipping remaining checks
	I1220 01:36:34.815804   21649 status.go:176] ha-003647-m02 status: &{Name:ha-003647-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:36:34.815820   21649 status.go:174] checking status of ha-003647-m03 ...
	I1220 01:36:34.817314   21649 status.go:371] ha-003647-m03 host status = "Running" (err=<nil>)
	I1220 01:36:34.817338   21649 host.go:66] Checking if "ha-003647-m03" exists ...
	I1220 01:36:34.819828   21649 main.go:144] libmachine: domain ha-003647-m03 has defined MAC address 52:54:00:05:af:8e in network mk-ha-003647
	I1220 01:36:34.820232   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:af:8e", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:33:31 +0000 UTC Type:0 Mac:52:54:00:05:af:8e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-003647-m03 Clientid:01:52:54:00:05:af:8e}
	I1220 01:36:34.820275   21649 main.go:144] libmachine: domain ha-003647-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:05:af:8e in network mk-ha-003647
	I1220 01:36:34.820421   21649 host.go:66] Checking if "ha-003647-m03" exists ...
	I1220 01:36:34.820620   21649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1220 01:36:34.822579   21649 main.go:144] libmachine: domain ha-003647-m03 has defined MAC address 52:54:00:05:af:8e in network mk-ha-003647
	I1220 01:36:34.822959   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:af:8e", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:33:31 +0000 UTC Type:0 Mac:52:54:00:05:af:8e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-003647-m03 Clientid:01:52:54:00:05:af:8e}
	I1220 01:36:34.822977   21649 main.go:144] libmachine: domain ha-003647-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:05:af:8e in network mk-ha-003647
	I1220 01:36:34.823101   21649 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/ha-003647-m03/id_rsa Username:docker}
	I1220 01:36:34.905574   21649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 01:36:34.922461   21649 kubeconfig.go:125] found "ha-003647" server: "https://192.168.39.254:8443"
	I1220 01:36:34.922493   21649 api_server.go:166] Checking apiserver status ...
	I1220 01:36:34.922538   21649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 01:36:34.943020   21649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2441/cgroup
	W1220 01:36:34.956236   21649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2441/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1220 01:36:34.956306   21649 ssh_runner.go:195] Run: ls
	I1220 01:36:34.961714   21649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1220 01:36:34.966933   21649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1220 01:36:34.966956   21649 status.go:463] ha-003647-m03 apiserver status = Running (err=<nil>)
	I1220 01:36:34.966964   21649 status.go:176] ha-003647-m03 status: &{Name:ha-003647-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:36:34.966977   21649 status.go:174] checking status of ha-003647-m04 ...
	I1220 01:36:34.968554   21649 status.go:371] ha-003647-m04 host status = "Running" (err=<nil>)
	I1220 01:36:34.968570   21649 host.go:66] Checking if "ha-003647-m04" exists ...
	I1220 01:36:34.970852   21649 main.go:144] libmachine: domain ha-003647-m04 has defined MAC address 52:54:00:45:4d:65 in network mk-ha-003647
	I1220 01:36:34.971240   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:4d:65", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:35:34 +0000 UTC Type:0 Mac:52:54:00:45:4d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-003647-m04 Clientid:01:52:54:00:45:4d:65}
	I1220 01:36:34.971258   21649 main.go:144] libmachine: domain ha-003647-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:4d:65 in network mk-ha-003647
	I1220 01:36:34.971377   21649 host.go:66] Checking if "ha-003647-m04" exists ...
	I1220 01:36:34.971615   21649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1220 01:36:34.973546   21649 main.go:144] libmachine: domain ha-003647-m04 has defined MAC address 52:54:00:45:4d:65 in network mk-ha-003647
	I1220 01:36:34.973887   21649 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:4d:65", ip: ""} in network mk-ha-003647: {Iface:virbr1 ExpiryTime:2025-12-20 02:35:34 +0000 UTC Type:0 Mac:52:54:00:45:4d:65 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-003647-m04 Clientid:01:52:54:00:45:4d:65}
	I1220 01:36:34.973904   21649 main.go:144] libmachine: domain ha-003647-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:4d:65 in network mk-ha-003647
	I1220 01:36:34.974019   21649 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/ha-003647-m04/id_rsa Username:docker}
	I1220 01:36:35.058329   21649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 01:36:35.075368   21649 status.go:176] ha-003647-m04 status: &{Name:ha-003647-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.67s)
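The "Non-zero exit ... exit status 7" above is expected: after `node stop m02`, `minikube status` reports stopped components through its exit code rather than failing outright, so the test carries on. A minimal sketch that tells that kind of exit apart from a command that could not run at all; the binary path and profile come from the log, and the meaning of the code is only what this report shows (7 while one control-plane node is down).

// statuscode.go - sketch of reading the `minikube status` exit code after a
// node stop, mirroring the invocation logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-003647",
		"status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes healthy (exit 0)")
	case errors.As(err, &ee):
		// Non-zero here encodes stopped components (the log shows 7 while
		// ha-003647-m02 is stopped); it is not a hard failure.
		fmt.Println("status exit code:", ee.ExitCode())
	default:
		panic(err) // the binary could not be run at all
	}
}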

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (21.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node start m02 --alsologtostderr -v 5
E1220 01:36:40.825339   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 node start m02 --alsologtostderr -v 5: (20.980964815s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (164.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 stop --alsologtostderr -v 5
E1220 01:37:02.676456   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 stop --alsologtostderr -v 5: (42.967495347s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 start --wait true --alsologtostderr -v 5
E1220 01:38:02.745772   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 start --wait true --alsologtostderr -v 5: (2m1.397987443s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (164.50s)
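RestartClusterKeepsNodes records `node list` before the stop and compares it with the output after `start --wait true`, which is how the test asserts that a full restart keeps all four nodes. A minimal sketch of that comparison, with the stop/start cycle itself left out; binary path and profile are from the log.

// nodelist.go - sketch of the before/after `node list` comparison.
package main

import (
	"fmt"
	"os/exec"
)

func nodeList() string {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-003647",
		"node", "list", "--alsologtostderr", "-v", "5").Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	before := nodeList()
	// ... `minikube stop` and `minikube start --wait true` would run here ...
	after := nodeList()

	if before != after {
		fmt.Printf("node list changed across restart:\n%s\nvs\n%s\n", before, after)
		return
	}
	fmt.Println("node list preserved across restart")
}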

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (6.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 node delete m03 --alsologtostderr -v 5: (6.244074716s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.87s)
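The go-template at ha_test.go:521 walks every node's conditions and prints only the Ready status, which is how the test confirms the remaining nodes stay Ready after m03 is deleted. The same check can be done by decoding the JSON form of `kubectl get nodes`; a minimal sketch follows (kubectl on PATH and the current kubeconfig context are assumptions).

// readycheck.go - sketch of the Ready-condition check performed by the
// go-template in the log, done here against `kubectl get nodes -o json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				// Mirrors the {{if eq .type "Ready"}} branch of the template.
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}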

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (38.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 stop --alsologtostderr -v 5
E1220 01:39:57.552823   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:40:18.901328   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 stop --alsologtostderr -v 5: (38.008703315s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5: exit status 7 (61.973875ms)

                                                
                                                
-- stdout --
	ha-003647
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003647-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003647-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:40:28.142113   22468 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:40:28.142560   22468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:40:28.142571   22468 out.go:374] Setting ErrFile to fd 2...
	I1220 01:40:28.142575   22468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:40:28.142746   22468 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:40:28.142908   22468 out.go:368] Setting JSON to false
	I1220 01:40:28.142934   22468 mustload.go:66] Loading cluster: ha-003647
	I1220 01:40:28.143053   22468 notify.go:221] Checking for updates...
	I1220 01:40:28.143321   22468 config.go:182] Loaded profile config "ha-003647": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:40:28.143334   22468 status.go:174] checking status of ha-003647 ...
	I1220 01:40:28.145948   22468 status.go:371] ha-003647 host status = "Stopped" (err=<nil>)
	I1220 01:40:28.145964   22468 status.go:384] host is not running, skipping remaining checks
	I1220 01:40:28.145968   22468 status.go:176] ha-003647 status: &{Name:ha-003647 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:40:28.145983   22468 status.go:174] checking status of ha-003647-m02 ...
	I1220 01:40:28.147330   22468 status.go:371] ha-003647-m02 host status = "Stopped" (err=<nil>)
	I1220 01:40:28.147346   22468 status.go:384] host is not running, skipping remaining checks
	I1220 01:40:28.147352   22468 status.go:176] ha-003647-m02 status: &{Name:ha-003647-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:40:28.147366   22468 status.go:174] checking status of ha-003647-m04 ...
	I1220 01:40:28.148709   22468 status.go:371] ha-003647-m04 host status = "Stopped" (err=<nil>)
	I1220 01:40:28.148723   22468 status.go:384] host is not running, skipping remaining checks
	I1220 01:40:28.148729   22468 status.go:176] ha-003647-m04 status: &{Name:ha-003647-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.07s)
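The stdout block above is the plain-text form of `minikube status`: a node name line followed by `key: value` lines, one stanza per node. A minimal sketch that parses that layout into per-node maps; the sample input is shortened from the StopCluster output shown above, and the parsing rules are an assumption based on that output alone.

// parsestatus.go - sketch of parsing the plain-text `minikube status` stanzas.
package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := "ha-003647\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n\n" +
		"ha-003647-m04\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"

	nodes := map[string]map[string]string{}
	var current string
	for _, line := range strings.Split(sample, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			current = ""
			continue
		}
		if k, v, ok := strings.Cut(line, ": "); ok && current != "" {
			nodes[current][k] = v
			continue
		}
		// A line without "key: value" starts a new node stanza.
		current = line
		nodes[current] = map[string]string{}
	}
	for name, fields := range nodes {
		fmt.Println(name, "->", fields)
	}
}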

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (86.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=docker
E1220 01:40:46.586187   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:41:20.602974   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:41:34.991346   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=docker: (1m26.134321643s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-003647 node add --control-plane --alsologtostderr -v 5: (1m15.871920077s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-003647 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (38.24s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-536943 --driver=kvm2  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-536943 --driver=kvm2  --container-runtime=docker: (38.236825275s)
--- PASS: TestImageBuild/serial/Setup (38.24s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.43s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-536943
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-536943: (1.431544099s)
--- PASS: TestImageBuild/serial/NormalBuild (1.43s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-536943
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)
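`--build-opt=build-arg=ENV_A=test_env_str` forwards a Docker build argument through `minikube image build`, and `--build-opt=no-cache` forces a clean build. The Dockerfile under ./testdata/image-build/test-arg presumably consumes the value via ARG; its contents are not shown in this report, so that part is an assumption. A minimal sketch of the same invocation driven from Go:

// buildarg.go - sketch of the BuildWithBuildArg invocation logged above;
// binary path, tag, flags and profile are copied from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumed Dockerfile shape in ./testdata/image-build/test-arg:
	//   ARG ENV_A
	//   RUN echo "$ENV_A"
	cmd := exec.Command("out/minikube-linux-amd64", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg",
		"-p", "image-536943")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}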

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.6s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-536943
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-536943
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.66s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-951289 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=docker
E1220 01:44:57.551848   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-951289 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=docker: (1m20.808347129s)
--- PASS: TestJSONOutput/start/Command (80.81s)
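With --output=json the start command emits one CloudEvents v1.0 envelope per line (specversion, a type such as io.k8s.sigs.minikube.step, and a data payload carrying currentstep/totalsteps), as the TestErrorJSONOutput stdout near the end of this section shows. A minimal sketch that decodes one such line; the sample is shortened from that output, and only the fields visible in this report are modelled.

// jsonstep.go - sketch of decoding a single minikube --output=json event line.
package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",` +
		`"data":{"currentstep":"0","name":"Initial Minikube Setup","totalsteps":"19"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Each output line is a self-contained event; the step counters live in data.
	fmt.Printf("%s step %s/%s: %s\n",
		ev.Type, ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
}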

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-951289 --output=json --user=testUser
E1220 01:45:18.900297   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-951289 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.64s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-951289 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-951289 --output=json --user=testUser: (13.644445959s)
--- PASS: TestJSONOutput/stop/Command (13.64s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-108288 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-108288 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.413236ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f74342cb-5281-47f1-9d9c-0d241e7bffd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-108288] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6bec358-b5fe-4c42-843f-97b068aeeac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"355ad7bb-591b-4306-adc6-57d117a0fa39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig"}}
	{"specversion":"1.0","id":"450aa239-e018-49e2-b5af-49b32fc7297b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube"}}
	{"specversion":"1.0","id":"3d88f546-c691-4e15-bd34-cd49cf6ea845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b9b988f6-3466-4c1f-8c7a-fe4114f30e06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"558c14d6-23fb-40e1-ba2e-8001531c65e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-108288" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-108288
--- PASS: TestErrorJSONOutput (0.24s)
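The stdout above is newline-delimited, CloudEvents-style JSON as emitted by --output=json. A minimal Go sketch (not part of this suite) for decoding such lines; the struct fields mirror only the keys visible in that output, and anything beyond them is an assumption:

// Decode minikube --output=json event lines read from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Example: out/minikube-linux-amd64 start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// The error event above carries "exitcode" and "message" inside data.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}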

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (82.81s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-421446 --driver=kvm2  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-421446 --driver=kvm2  --container-runtime=docker: (40.312948553s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-423570 --driver=kvm2  --container-runtime=docker
E1220 01:46:34.996311   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-423570 --driver=kvm2  --container-runtime=docker: (40.066141333s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-421446
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-423570
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-423570" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-423570
helpers_test.go:176: Cleaning up "first-421446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-421446
--- PASS: TestMinikubeProfile (82.81s)
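The profile checks above rely on `profile list` with JSON output. A minimal Go sketch (outside the test suite) of reading that output; the "valid"/"invalid" keys and the "Name" field are assumptions about the JSON shape, not taken from this log:

// List valid minikube profiles by shelling out to `profile list --output=json`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output=json").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	// Assumed shape: {"valid": [...], "invalid": [...]}; unknown fields are ignored.
	var lists map[string][]profile
	if err := json.Unmarshal(out, &lists); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, p := range lists["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}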

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.82s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-522646 --memory=3072 --mount-string /tmp/TestMountStartserial2326871860/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-522646 --memory=3072 --mount-string /tmp/TestMountStartserial2326871860/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=docker: (21.817288804s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.82s)

TestMountStart/serial/VerifyMountFirst (0.31s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-522646 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-522646 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
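A minimal Go sketch (not the suite's code) of the kind of check VerifyMountFirst performs: run `findmnt --json` inside the guest via `minikube ssh` and confirm the mount target is present. The findmnt JSON layout ("filesystems" entries with target/source/fstype/options) is standard util-linux output; treat the exact fields used here as an assumption:

// Verify a 9p host mount inside the minikube guest.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-522646",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		log.Fatalf("findmnt over ssh: %v", err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("mounted %s from %s (%s)\n", fs.Target, fs.Source, fs.Fstype)
	}
}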

                                                
                                    
TestMountStart/serial/StartWithMountSecond (22.32s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-536868 --memory=3072 --mount-string /tmp/TestMountStartserial2326871860/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-536868 --memory=3072 --mount-string /tmp/TestMountStartserial2326871860/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=docker: (21.323606093s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.32s)

TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-522646 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-536868
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-536868: (1.23313023s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (18.93s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-536868
E1220 01:47:58.036987   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-536868: (17.927588414s)
--- PASS: TestMountStart/serial/RestartStopped (18.93s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-536868 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (136.17s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=docker
E1220 01:49:57.553583   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:50:18.901303   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=docker: (2m15.847410646s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.17s)

TestMultiNode/serial/DeployApp2Nodes (4.06s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-129632 -- rollout status deployment/busybox: (2.470019149s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-7hgrn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-hnz29 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-7hgrn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-hnz29 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-7hgrn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-hnz29 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.06s)
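A minimal Go sketch (illustrative only, not the test's code) of the DNS check above: list the busybox pods by jsonpath and run `nslookup kubernetes.default` in each via kubectl exec. Plain kubectl with --context is used here instead of the minikube kubectl wrapper; the context name comes from this log:

// Run an in-pod DNS lookup in every pod of the context.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "multinode-129632"
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("get pods: %v", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			log.Fatalf("nslookup in %s failed: %v\n%s", pod, err, res)
		}
		fmt.Printf("%s resolved kubernetes.default\n", pod)
	}
}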

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-7hgrn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-7hgrn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-hnz29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129632 -- exec busybox-7b57f96db7-hnz29 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
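A minimal Go sketch (illustrative only) of the host-reachability check above: resolve host.minikube.internal inside a pod, then ping the returned address once. The awk/cut pipeline matches the one in the log; the context and pod names are taken from this run, and plain kubectl is assumed in place of the minikube wrapper:

// Check that a pod can reach the host network gateway.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx, pod = "multinode-129632", "busybox-7b57f96db7-7hgrn"
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		log.Fatalf("resolve: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal =", hostIP)
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if res, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--", "sh", "-c", ping).CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, res)
	}
	fmt.Println("host reachable from", pod)
}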

                                                
                                    
TestMultiNode/serial/AddNode (44.7s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-129632 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-129632 -v=5 --alsologtostderr: (44.290014295s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.70s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-129632 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.44s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

TestMultiNode/serial/CopyFile (5.79s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp testdata/cp-test.txt multinode-129632:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile178121116/001/cp-test_multinode-129632.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632:/home/docker/cp-test.txt multinode-129632-m02:/home/docker/cp-test_multinode-129632_multinode-129632-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test_multinode-129632_multinode-129632-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632:/home/docker/cp-test.txt multinode-129632-m03:/home/docker/cp-test_multinode-129632_multinode-129632-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test_multinode-129632_multinode-129632-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp testdata/cp-test.txt multinode-129632-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile178121116/001/cp-test_multinode-129632-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m02:/home/docker/cp-test.txt multinode-129632:/home/docker/cp-test_multinode-129632-m02_multinode-129632.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test_multinode-129632-m02_multinode-129632.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m02:/home/docker/cp-test.txt multinode-129632-m03:/home/docker/cp-test_multinode-129632-m02_multinode-129632-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test_multinode-129632-m02_multinode-129632-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp testdata/cp-test.txt multinode-129632-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile178121116/001/cp-test_multinode-129632-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m03:/home/docker/cp-test.txt multinode-129632:/home/docker/cp-test_multinode-129632-m03_multinode-129632.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632 "sudo cat /home/docker/cp-test_multinode-129632-m03_multinode-129632.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 cp multinode-129632-m03:/home/docker/cp-test.txt multinode-129632-m02:/home/docker/cp-test_multinode-129632-m03_multinode-129632-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 ssh -n multinode-129632-m02 "sudo cat /home/docker/cp-test_multinode-129632-m03_multinode-129632-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.79s)
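A minimal Go sketch (not the suite's helper) of the copy-and-verify round trip exercised above: `minikube cp` a local file onto each node, then read it back with `minikube ssh -n <node> sudo cat`. The profile and node names are from this log; error handling is kept deliberately simple:

// Copy a test file to every node and read it back.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "multinode-129632"
	nodes := []string{"multinode-129632", "multinode-129632-m02", "multinode-129632-m03"}
	for _, node := range nodes {
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp to %s: %v\n%s", node, err, out)
		}
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			"-n", node, "sudo cat /home/docker/cp-test.txt")
		out, err := cat.Output()
		if err != nil {
			log.Fatalf("read back from %s: %v", node, err)
		}
		fmt.Printf("%s: %s", node, out)
	}
}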

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-129632 node stop m03: (1.825508849s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129632 status: exit status 7 (319.775299ms)

                                                
                                                
-- stdout --
	multinode-129632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-129632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-129632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr: exit status 7 (315.79092ms)

                                                
                                                
-- stdout --
	multinode-129632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-129632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-129632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:51:19.701374   26425 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:51:19.701734   26425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:51:19.701743   26425 out.go:374] Setting ErrFile to fd 2...
	I1220 01:51:19.701747   26425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:51:19.701926   26425 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:51:19.702092   26425 out.go:368] Setting JSON to false
	I1220 01:51:19.702118   26425 mustload.go:66] Loading cluster: multinode-129632
	I1220 01:51:19.702195   26425 notify.go:221] Checking for updates...
	I1220 01:51:19.702514   26425 config.go:182] Loaded profile config "multinode-129632": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:51:19.702528   26425 status.go:174] checking status of multinode-129632 ...
	I1220 01:51:19.704587   26425 status.go:371] multinode-129632 host status = "Running" (err=<nil>)
	I1220 01:51:19.704604   26425 host.go:66] Checking if "multinode-129632" exists ...
	I1220 01:51:19.706992   26425 main.go:144] libmachine: domain multinode-129632 has defined MAC address 52:54:00:be:b0:3a in network mk-multinode-129632
	I1220 01:51:19.707405   26425 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:b0:3a", ip: ""} in network mk-multinode-129632: {Iface:virbr1 ExpiryTime:2025-12-20 02:48:19 +0000 UTC Type:0 Mac:52:54:00:be:b0:3a Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-129632 Clientid:01:52:54:00:be:b0:3a}
	I1220 01:51:19.707437   26425 main.go:144] libmachine: domain multinode-129632 has defined IP address 192.168.39.112 and MAC address 52:54:00:be:b0:3a in network mk-multinode-129632
	I1220 01:51:19.707561   26425 host.go:66] Checking if "multinode-129632" exists ...
	I1220 01:51:19.707783   26425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1220 01:51:19.710008   26425 main.go:144] libmachine: domain multinode-129632 has defined MAC address 52:54:00:be:b0:3a in network mk-multinode-129632
	I1220 01:51:19.710386   26425 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:b0:3a", ip: ""} in network mk-multinode-129632: {Iface:virbr1 ExpiryTime:2025-12-20 02:48:19 +0000 UTC Type:0 Mac:52:54:00:be:b0:3a Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-129632 Clientid:01:52:54:00:be:b0:3a}
	I1220 01:51:19.710408   26425 main.go:144] libmachine: domain multinode-129632 has defined IP address 192.168.39.112 and MAC address 52:54:00:be:b0:3a in network mk-multinode-129632
	I1220 01:51:19.710547   26425 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/multinode-129632/id_rsa Username:docker}
	I1220 01:51:19.787164   26425 ssh_runner.go:195] Run: systemctl --version
	I1220 01:51:19.793533   26425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 01:51:19.812076   26425 kubeconfig.go:125] found "multinode-129632" server: "https://192.168.39.112:8443"
	I1220 01:51:19.812116   26425 api_server.go:166] Checking apiserver status ...
	I1220 01:51:19.812167   26425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1220 01:51:19.832506   26425 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2467/cgroup
	W1220 01:51:19.843532   26425 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2467/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1220 01:51:19.843608   26425 ssh_runner.go:195] Run: ls
	I1220 01:51:19.849640   26425 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I1220 01:51:19.854501   26425 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I1220 01:51:19.854522   26425 status.go:463] multinode-129632 apiserver status = Running (err=<nil>)
	I1220 01:51:19.854532   26425 status.go:176] multinode-129632 status: &{Name:multinode-129632 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:51:19.854558   26425 status.go:174] checking status of multinode-129632-m02 ...
	I1220 01:51:19.856357   26425 status.go:371] multinode-129632-m02 host status = "Running" (err=<nil>)
	I1220 01:51:19.856376   26425 host.go:66] Checking if "multinode-129632-m02" exists ...
	I1220 01:51:19.859055   26425 main.go:144] libmachine: domain multinode-129632-m02 has defined MAC address 52:54:00:b5:55:f3 in network mk-multinode-129632
	I1220 01:51:19.859475   26425 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b5:55:f3", ip: ""} in network mk-multinode-129632: {Iface:virbr1 ExpiryTime:2025-12-20 02:49:23 +0000 UTC Type:0 Mac:52:54:00:b5:55:f3 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-129632-m02 Clientid:01:52:54:00:b5:55:f3}
	I1220 01:51:19.859496   26425 main.go:144] libmachine: domain multinode-129632-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:b5:55:f3 in network mk-multinode-129632
	I1220 01:51:19.859614   26425 host.go:66] Checking if "multinode-129632-m02" exists ...
	I1220 01:51:19.859832   26425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1220 01:51:19.861904   26425 main.go:144] libmachine: domain multinode-129632-m02 has defined MAC address 52:54:00:b5:55:f3 in network mk-multinode-129632
	I1220 01:51:19.862298   26425 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b5:55:f3", ip: ""} in network mk-multinode-129632: {Iface:virbr1 ExpiryTime:2025-12-20 02:49:23 +0000 UTC Type:0 Mac:52:54:00:b5:55:f3 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-129632-m02 Clientid:01:52:54:00:b5:55:f3}
	I1220 01:51:19.862319   26425 main.go:144] libmachine: domain multinode-129632-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:b5:55:f3 in network mk-multinode-129632
	I1220 01:51:19.862452   26425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/machines/multinode-129632-m02/id_rsa Username:docker}
	I1220 01:51:19.942103   26425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1220 01:51:19.957631   26425 status.go:176] multinode-129632-m02 status: &{Name:multinode-129632-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:51:19.957675   26425 status.go:174] checking status of multinode-129632-m03 ...
	I1220 01:51:19.959400   26425 status.go:371] multinode-129632-m03 host status = "Stopped" (err=<nil>)
	I1220 01:51:19.959419   26425 status.go:384] host is not running, skipping remaining checks
	I1220 01:51:19.959424   26425 status.go:176] multinode-129632-m03 status: &{Name:multinode-129632-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
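A minimal Go sketch (illustrative) of reading `minikube status` for a multi-node profile. As the run above shows, status exits non-zero (here 7) when a node is stopped, so a non-zero exit is not treated as a hard failure; the key/value lines of the captured stdout are parsed instead:

// Parse the plain-text status output per node.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-129632", "status")
	out, err := cmd.Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatalf("status: %v", err) // fail only if the binary could not run at all
		}
	}
	for _, raw := range strings.Split(string(out), "\n") {
		line := strings.TrimSpace(raw)
		if line == "" {
			continue
		}
		if k, v, ok := strings.Cut(line, ": "); ok {
			fmt.Printf("%-10s = %s\n", k, v)
		} else {
			fmt.Println("node:", line) // bare lines carry the node name
		}
	}
}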

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.37s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 node start m03 -v=5 --alsologtostderr
E1220 01:51:34.991652   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:51:41.946637   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-129632 node start m03 -v=5 --alsologtostderr: (37.877511294s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.37s)

TestMultiNode/serial/RestartKeepsNodes (164.59s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129632
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-129632
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-129632: (28.377326467s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129632 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129632 --wait=true -v=5 --alsologtostderr: (2m16.095818625s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129632
--- PASS: TestMultiNode/serial/RestartKeepsNodes (164.59s)

TestMultiNode/serial/DeleteNode (1.99s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-129632 node delete m03: (1.549917428s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.99s)

TestMultiNode/serial/StopMultiNode (26.71s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 stop
E1220 01:54:57.551870   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-129632 stop: (26.583543465s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129632 status: exit status 7 (61.560453ms)

                                                
                                                
-- stdout --
	multinode-129632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-129632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr: exit status 7 (61.940148ms)

                                                
                                                
-- stdout --
	multinode-129632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-129632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1220 01:55:11.616546   27086 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:55:11.616657   27086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:55:11.616664   27086 out.go:374] Setting ErrFile to fd 2...
	I1220 01:55:11.616671   27086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:55:11.616875   27086 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:55:11.617031   27086 out.go:368] Setting JSON to false
	I1220 01:55:11.617055   27086 mustload.go:66] Loading cluster: multinode-129632
	I1220 01:55:11.617145   27086 notify.go:221] Checking for updates...
	I1220 01:55:11.617453   27086 config.go:182] Loaded profile config "multinode-129632": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:55:11.617467   27086 status.go:174] checking status of multinode-129632 ...
	I1220 01:55:11.619965   27086 status.go:371] multinode-129632 host status = "Stopped" (err=<nil>)
	I1220 01:55:11.619982   27086 status.go:384] host is not running, skipping remaining checks
	I1220 01:55:11.619987   27086 status.go:176] multinode-129632 status: &{Name:multinode-129632 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1220 01:55:11.620003   27086 status.go:174] checking status of multinode-129632-m02 ...
	I1220 01:55:11.621408   27086 status.go:371] multinode-129632-m02 host status = "Stopped" (err=<nil>)
	I1220 01:55:11.621421   27086 status.go:384] host is not running, skipping remaining checks
	I1220 01:55:11.621425   27086 status.go:176] multinode-129632-m02 status: &{Name:multinode-129632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.71s)

TestMultiNode/serial/RestartMultiNode (84.91s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129632 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=docker
E1220 01:55:18.901104   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 01:56:34.991754   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129632 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=docker: (1m24.458169171s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129632 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.91s)

TestMultiNode/serial/ValidateNameConflict (40.31s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129632
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129632-m02 --driver=kvm2  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-129632-m02 --driver=kvm2  --container-runtime=docker: exit status 14 (79.903562ms)

                                                
                                                
-- stdout --
	* [multinode-129632-m02] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-129632-m02' is duplicated with machine name 'multinode-129632-m02' in profile 'multinode-129632'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129632-m03 --driver=kvm2  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129632-m03 --driver=kvm2  --container-runtime=docker: (39.239796516s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-129632
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-129632: exit status 80 (189.521403ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-129632 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-129632-m03 already exists in multinode-129632-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_2fc72427b295733063c022c0c069e7a2f5be6375_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-129632-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.31s)

TestScheduledStopUnix (110.38s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-739289 --memory=3072 --driver=kvm2  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-739289 --memory=3072 --driver=kvm2  --container-runtime=docker: (38.836290607s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739289 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1220 01:57:57.112573   27917 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:57:57.112684   27917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:57:57.112692   27917 out.go:374] Setting ErrFile to fd 2...
	I1220 01:57:57.112696   27917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:57:57.112900   27917 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:57:57.113132   27917 out.go:368] Setting JSON to false
	I1220 01:57:57.113230   27917 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:57:57.113564   27917 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:57:57.113660   27917 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/config.json ...
	I1220 01:57:57.113838   27917 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:57:57.113929   27917 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-739289 -n scheduled-stop-739289
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739289 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1220 01:57:57.386076   27965 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:57:57.386183   27965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:57:57.386193   27965 out.go:374] Setting ErrFile to fd 2...
	I1220 01:57:57.386213   27965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:57:57.386410   27965 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:57:57.386650   27965 out.go:368] Setting JSON to false
	I1220 01:57:57.386848   27965 daemonize_unix.go:73] killing process 27954 as it is an old scheduled stop
	I1220 01:57:57.386958   27965 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:57:57.387480   27965 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:57:57.387582   27965 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/config.json ...
	I1220 01:57:57.387820   27965 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:57:57.387965   27965 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1220 01:57:57.393078   13018 retry.go:31] will retry after 87.268µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.394249   13018 retry.go:31] will retry after 211.322µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.395406   13018 retry.go:31] will retry after 125.435µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.396563   13018 retry.go:31] will retry after 295.04µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.397699   13018 retry.go:31] will retry after 423.397µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.398830   13018 retry.go:31] will retry after 875.51µs: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.399970   13018 retry.go:31] will retry after 1.380839ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.402165   13018 retry.go:31] will retry after 2.510328ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.405343   13018 retry.go:31] will retry after 1.755182ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.407542   13018 retry.go:31] will retry after 5.301413ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.413752   13018 retry.go:31] will retry after 3.141993ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.418011   13018 retry.go:31] will retry after 8.000335ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.426796   13018 retry.go:31] will retry after 15.994987ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.442996   13018 retry.go:31] will retry after 27.501549ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
I1220 01:57:57.471245   13018 retry.go:31] will retry after 23.850016ms: open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739289 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1220 01:58:00.604242   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739289 -n scheduled-stop-739289
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-739289
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739289 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1220 01:58:23.039548   28041 out.go:360] Setting OutFile to fd 1 ...
	I1220 01:58:23.039804   28041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:58:23.039817   28041 out.go:374] Setting ErrFile to fd 2...
	I1220 01:58:23.039823   28041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1220 01:58:23.040093   28041 root.go:338] Updating PATH: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/bin
	I1220 01:58:23.040424   28041 out.go:368] Setting JSON to false
	I1220 01:58:23.040524   28041 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:58:23.040893   28041 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1220 01:58:23.040984   28041 profile.go:143] Saving config to /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/scheduled-stop-739289/config.json ...
	I1220 01:58:23.041212   28041 mustload.go:66] Loading cluster: scheduled-stop-739289
	I1220 01:58:23.041336   28041 config.go:182] Loaded profile config "scheduled-stop-739289": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-739289
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-739289: exit status 7 (57.774665ms)

                                                
                                                
-- stdout --
	scheduled-stop-739289
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739289 -n scheduled-stop-739289
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739289 -n scheduled-stop-739289: exit status 7 (58.630356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-739289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-739289
--- PASS: TestScheduledStopUnix (110.38s)
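For reference, the scheduled-stop flow exercised above can be reproduced by hand with the same flags the test passes; the profile name below is a placeholder, not one used in this run.

    # schedule a stop a few seconds out, then cancel it before it fires
    minikube start -p sched-demo --driver=kvm2 --container-runtime=docker
    minikube stop -p sched-demo --schedule 15s
    minikube stop -p sched-demo --cancel-scheduled
    # the host should still report Running after the cancel
    minikube status -p sched-demo --format='{{.Host}}'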

                                                
                                    
TestSkaffold (115.57s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe909115879 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-733308 --memory=3072 --driver=kvm2  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-733308 --memory=3072 --driver=kvm2  --container-runtime=docker: (39.277852625s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/minitest/minikube/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe909115879 run --minikube-profile skaffold-733308 --kube-context skaffold-733308 --status-check=true --port-forward=false --interactive=false
E1220 01:59:57.551840   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:00:18.901427   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe909115879 run --minikube-profile skaffold-733308 --kube-context skaffold-733308 --status-check=true --port-forward=false --interactive=false: (1m3.486388524s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-5b778dc66c-cf7mx" [65ed294f-e014-45b0-a22e-e71a55713eb7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004719667s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6dc469d9b9-8pkh2" [c130a977-82e1-4943-9501-18fdbec3879a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003646877s
helpers_test.go:176: Cleaning up "skaffold-733308" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-733308
--- PASS: TestSkaffold (115.57s)
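A minimal sketch of the workflow this test drives, assuming skaffold is on PATH and the working directory contains a skaffold.yaml; the profile name is a placeholder.

    # build and deploy against a dedicated minikube profile
    minikube start -p skaffold-demo --memory=3072 --driver=kvm2 --container-runtime=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false
    # confirm the deployed workloads are up
    kubectl --context skaffold-demo get pods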

                                                
                                    
TestRunningBinaryUpgrade (372.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2062746071 start -p running-upgrade-413923 --memory=3072 --vm-driver=kvm2  --container-runtime=docker
E1220 02:04:38.037973   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:04:57.552386   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2062746071 start -p running-upgrade-413923 --memory=3072 --vm-driver=kvm2  --container-runtime=docker: (59.450440814s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-413923 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
E1220 02:05:52.150459   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.155819   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.166163   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.186563   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.226990   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.307394   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.467847   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:52.788284   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:53.428987   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:05:54.709630   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-413923 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (5m11.145792078s)
helpers_test.go:176: Cleaning up "running-upgrade-413923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-413923
--- PASS: TestRunningBinaryUpgrade (372.04s)
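The running-binary upgrade reduces to starting a profile with an older minikube release and then re-running start on the same profile with the newer binary while the cluster is still up. A sketch, with hypothetical binary paths standing in for the temporary files the test downloads:

    # the old release creates the cluster (its legacy flag is --vm-driver)
    ./minikube-v1.35.0 start -p upgrade-demo --memory=3072 --vm-driver=kvm2 --container-runtime=docker
    # the newer binary takes over the same, still-running profile
    ./minikube-new start -p upgrade-demo --memory=3072 --driver=kvm2 --container-runtime=docker
    ./minikube-new status -p upgrade-demo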

                                                
                                    
TestKubernetesUpgrade (181.57s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (1m4.256963091s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-082532
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-082532: (12.092111218s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-082532 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-082532 status --format={{.Host}}: exit status 7 (72.02233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (49.862495785s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-082532 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=docker: exit status 106 (80.40073ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-082532] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-082532
	    minikube start -p kubernetes-upgrade-082532 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0825322 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-082532 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-082532 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (54.103133874s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-082532" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-082532
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-082532: (1.027577217s)
--- PASS: TestKubernetesUpgrade (181.57s)
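The same upgrade path can be driven manually; the final downgrade attempt is expected to fail with K8S_DOWNGRADE_UNSUPPORTED, exactly as captured above. The profile name is a placeholder.

    minikube start -p k8s-up-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=docker
    minikube stop -p k8s-up-demo
    minikube start -p k8s-up-demo --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=kvm2 --container-runtime=docker
    kubectl --context k8s-up-demo version --output=json
    # downgrading in place is rejected (exit status 106); delete and recreate instead
    minikube start -p k8s-up-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=docker || true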

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=docker: exit status 14 (100.757343ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-810554] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/kubeconfig
	  - MINIKUBE_HOME=/home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
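As the MK_USAGE error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config, it has to be unset before a Kubernetes-free start will be accepted.

    # clear any globally pinned version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=docker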

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (99.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-810554 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
E1220 02:01:34.992605   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-810554 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (1m39.360054707s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-810554 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.59s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (40.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (39.164224873s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-810554 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-810554 status -o json: exit status 2 (220.373663ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-810554","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-810554
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.25s)
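Re-running start with --no-kubernetes on a profile that already runs Kubernetes keeps the VM but stops the control plane, which is what the JSON status above reflects (Host Running, Kubelet and APIServer Stopped). A sketch with a placeholder profile:

    minikube start -p nok8s-demo --memory=3072 --driver=kvm2 --container-runtime=docker
    minikube start -p nok8s-demo --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=docker
    # expect Host "Running" with Kubelet and APIServer "Stopped"
    minikube -p nok8s-demo status -o json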

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1826193378 start -p stopped-upgrade-743026 --memory=3072 --vm-driver=kvm2  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1826193378 start -p stopped-upgrade-743026 --memory=3072 --vm-driver=kvm2  --container-runtime=docker: (1m8.640144887s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1826193378 -p stopped-upgrade-743026 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1826193378 -p stopped-upgrade-743026 stop: (3.614101252s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-743026 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-743026 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (28.504413998s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.76s)

                                                
                                    
TestNoKubernetes/serial/Start (20.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-810554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=docker: (20.168921985s)
--- PASS: TestNoKubernetes/serial/Start (20.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-810554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-810554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (163.484914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
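The kubelet check is a plain systemctl probe over minikube ssh; a non-zero exit is the expected result on a --no-kubernetes profile.

    # exit status 0 would mean kubelet is active; anything else means it is not running
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet is running" || echo "kubelet is not running"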

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (4.550921972s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.93s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-810554
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-810554: (1.229305333s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (45.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-810554 --driver=kvm2  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-810554 --driver=kvm2  --container-runtime=docker: (45.347527195s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-810554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-810554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.946524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-743026
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-743026: (1.168537898s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
TestPause/serial/Start (85.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-764015 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-764015 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=docker: (1m25.022121919s)
--- PASS: TestPause/serial/Start (85.02s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (117.68s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-260363 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker
E1220 02:05:57.270915   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:06:02.391540   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-260363 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker: (1m42.017168113s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-260363 image pull public.ecr.aws/docker/library/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-260363 image pull public.ecr.aws/docker/library/busybox:latest: (1.620805197s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-260363
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-260363: (14.038315123s)
--- PASS: TestPreload/Start-NoPreload-PullImage (117.68s)

                                                
                                    
TestISOImage/Setup (38.96s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-073858 --no-kubernetes --driver=kvm2  --container-runtime=docker
E1220 02:06:12.631805   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:06:33.112813   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:06:34.991243   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-073858 --no-kubernetes --driver=kvm2  --container-runtime=docker: (38.964090207s)
--- PASS: TestISOImage/Setup (38.96s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

                                                
                                    
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)
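The Binaries subtests above all reduce to a "which <tool>" probe inside the guest; the same sweep can be scripted as one loop (guest-demo is a placeholder profile started with --no-kubernetes, as in Setup).

    # verify the ISO ships the expected userland tools
    for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
        minikube -p guest-demo ssh "which $bin" >/dev/null && echo "ok: $bin" || echo "missing: $bin"
    done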

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (99.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-764015 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker
E1220 02:07:14.073735   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-764015 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=docker: (1m39.077083263s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (99.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (95.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-146675 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-146675 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.28.0: (1m35.315861949s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (95.32s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (41.98s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-260363 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=docker
E1220 02:08:21.947492   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-260363 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=docker: (41.774061054s)
preload_test.go:77: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-260363 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (41.98s)
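Together the two preload tests amount to: start without a preload tarball, pull an extra image by hand, stop, restart with preloads enabled, and confirm the manually pulled image survived the restart. A sketch with a placeholder profile:

    minikube start -p preload-demo --memory=3072 --preload=false --driver=kvm2 --container-runtime=docker
    minikube -p preload-demo image pull public.ecr.aws/docker/library/busybox:latest
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --driver=kvm2 --container-runtime=docker
    # the busybox image pulled before the restart should still be present
    minikube -p preload-demo image list | grep busybox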

                                                
                                    
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-764015 --alsologtostderr -v=5
E1220 02:08:35.994441   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-764015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-764015 --output=json --layout=cluster: exit status 2 (242.979381ms)

                                                
                                                
-- stdout --
	{"Name":"pause-764015","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-764015","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-764015 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (88.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-744061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-744061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: (1m28.414549066s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-764015 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (1.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-764015 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-764015 --alsologtostderr -v=5: (1.104433962s)
--- PASS: TestPause/serial/DeletePaused (1.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)
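Taken together, the pause subtests walk one profile through pause, status, unpause, a second pause, and deletion; the equivalent manual sequence, with a placeholder profile name:

    minikube pause -p pause-demo
    # StatusCode 418 ("Paused") for the cluster and apiserver; kubelet is reported Stopped
    minikube status -p pause-demo --output=json --layout=cluster
    minikube unpause -p pause-demo
    minikube pause -p pause-demo
    minikube delete -p pause-demo
    # the deleted profile should no longer appear here
    minikube profile list --output json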

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-504101 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-504101 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3: (1m39.04847145s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-146675 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dfa8169f-a3e6-4cc6-be5a-470e4de045a0] Pending
helpers_test.go:353: "busybox" [dfa8169f-a3e6-4cc6-be5a-470e4de045a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dfa8169f-a3e6-4cc6-be5a-470e4de045a0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004011307s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-146675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-146675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-146675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.273826459s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-146675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)
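The addon step doubles as an example of redirecting an addon's images at enable time: --images overrides the image for the named component and --registries pairs a registry with it (demo is a placeholder profile).

    # enable metrics-server but source its image from an alternate registry
    minikube addons enable metrics-server -p demo \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context demo describe deploy/metrics-server -n kube-system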

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-146675 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-146675 --alsologtostderr -v=3: (11.833002957s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-146675 -n old-k8s-version-146675
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-146675 -n old-k8s-version-146675: exit status 7 (75.043576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-146675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (39.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-146675 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.28.0
E1220 02:09:57.551854   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/addons-616728/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-146675 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.28.0: (39.433355626s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-146675 -n old-k8s-version-146675
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (39.75s)
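The Stop, EnableAddonAfterStop, and SecondStart steps show that addons can be toggled while a cluster is down and are applied on the next start; a condensed sketch with a placeholder profile:

    minikube stop -p demo
    # status exits with code 7 while stopped, which the test treats as acceptable
    minikube status -p demo --format='{{.Host}}' || true
    minikube addons enable dashboard -p demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    minikube start -p demo --driver=kvm2 --container-runtime=docker
    minikube status -p demo --format='{{.Host}}'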

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-744061 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a5d7edb3-321c-410a-8725-9246f00ab21c] Pending
helpers_test.go:353: "busybox" [a5d7edb3-321c-410a-8725-9246f00ab21c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a5d7edb3-321c-410a-8725-9246f00ab21c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004797562s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-744061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-744061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-744061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044788208s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-744061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-744061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-744061 --alsologtostderr -v=3: (12.731908635s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-b8lkj" [dc402626-331d-4c79-8776-4b1e3ae66e23] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1220 02:10:18.901314   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-b8lkj" [dc402626-331d-4c79-8776-4b1e3ae66e23] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004485098s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-504101 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [10cec174-33eb-4c1b-aa91-4baeca8e8bac] Pending
helpers_test.go:353: "busybox" [10cec174-33eb-4c1b-aa91-4baeca8e8bac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [10cec174-33eb-4c1b-aa91-4baeca8e8bac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004338887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-504101 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744061 -n no-preload-744061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744061 -n no-preload-744061: exit status 7 (63.415477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-744061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (42.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-744061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-744061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: (42.456064739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744061 -n no-preload-744061
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (42.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-b8lkj" [dc402626-331d-4c79-8776-4b1e3ae66e23] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003631385s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-146675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-504101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-504101 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-504101 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-504101 --alsologtostderr -v=3: (11.86237391s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-146675 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-146675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-146675 -n old-k8s-version-146675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-146675 -n old-k8s-version-146675: exit status 2 (235.225529ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-146675 -n old-k8s-version-146675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-146675 -n old-k8s-version-146675: exit status 2 (228.999265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-146675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-146675 -n old-k8s-version-146675
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-146675 -n old-k8s-version-146675
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-032958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-032958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3: (1m28.044064734s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-504101 -n embed-certs-504101
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-504101 -n embed-certs-504101: exit status 7 (70.625061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-504101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (65.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-504101 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-504101 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3: (1m4.938983242s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-504101 -n embed-certs-504101
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (65.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (75.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-974280 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1220 02:10:52.150524   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-974280 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: (1m15.396976691s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vmfrr" [c28bbc6e-8db7-4bb3-8c8f-4491eb44625e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004784673s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vmfrr" [c28bbc6e-8db7-4bb3-8c8f-4491eb44625e] Running
E1220 02:11:19.835278   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/skaffold-733308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005011706s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-744061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-744061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-744061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744061 -n no-preload-744061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744061 -n no-preload-744061: exit status 2 (264.417786ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-744061 -n no-preload-744061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-744061 -n no-preload-744061: exit status 2 (259.831366ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-744061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744061 -n no-preload-744061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-744061 -n no-preload-744061
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (68.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=docker
E1220 02:11:34.991805   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=docker: (1m8.486959516s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-g5dtw" [a9b667c7-f8ba-4717-a9fc-8b57ffa4edd5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003613976s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-g5dtw" [a9b667c7-f8ba-4717-a9fc-8b57ffa4edd5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004491289s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-504101 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-504101 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-504101 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-504101 -n embed-certs-504101
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-504101 -n embed-certs-504101: exit status 2 (237.8172ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-504101 -n embed-certs-504101
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-504101 -n embed-certs-504101: exit status 2 (250.152994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-504101 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-504101 -n embed-certs-504101
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-504101 -n embed-certs-504101
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (74.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=docker: (1m14.46892153s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-974280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-974280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.532896997s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b24c8b89-716a-4914-b5cb-c77c24d0644b] Pending
helpers_test.go:353: "busybox" [b24c8b89-716a-4914-b5cb-c77c24d0644b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b24c8b89-716a-4914-b5cb-c77c24d0644b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005973617s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-974280 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-974280 --alsologtostderr -v=3: (12.012875267s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-032958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-032958 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-032958 --alsologtostderr -v=3: (14.880493986s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974280 -n newest-cni-974280
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974280 -n newest-cni-974280: exit status 7 (69.737632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-974280 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (40.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-974280 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-974280 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: (39.753007122s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974280 -n newest-cni-974280
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958: exit status 7 (96.979583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-032958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-032958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-032958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=docker --kubernetes-version=v1.34.3: (1m0.467578075s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-032958 -n default-k8s-diff-port-032958
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.76s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-503505 "pgrep -a kubelet"
I1220 02:12:36.026988   13018 config.go:182] Loaded profile config "auto-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7kpnq" [71dd67e0-5d6c-4606-a9fd-923e99a182c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7kpnq" [71dd67e0-5d6c-4606-a9fd-923e99a182c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004827878s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-974280 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-974280 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974280 -n newest-cni-974280
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974280 -n newest-cni-974280: exit status 2 (313.483917ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974280 -n newest-cni-974280
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974280 -n newest-cni-974280: exit status 2 (309.259324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-974280 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974280 -n newest-cni-974280
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974280 -n newest-cni-974280
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (92.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=docker: (1m32.460011804s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (80.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=docker: (1m20.937056554s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (80.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-26cpx" [d60da203-587a-4a63-973c-f48e0dc6a605] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004176112s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-503505 "pgrep -a kubelet"
I1220 02:13:22.454386   13018 config.go:182] Loaded profile config "kindnet-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mkncf" [da206635-c42b-41dd-b411-cb1894bc3fe9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mkncf" [da206635-c42b-41dd-b411-cb1894bc3fe9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.005496537s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v5f62" [9881be4b-cb1e-4a7f-b3a4-15234c0b3c78] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v5f62" [9881be4b-cb1e-4a7f-b3a4-15234c0b3c78] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003648617s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v5f62" [9881be4b-cb1e-4a7f-b3a4-15234c0b3c78] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005984694s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-032958 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-032958 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/Start (92.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --container-runtime=docker: (1m32.22879038s)
--- PASS: TestNetworkPlugins/group/false/Start (92.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-503505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-b9vnf" [d47a0ed6-9c72-48aa-b815-11a5c72a3288] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-b9vnf" [d47a0ed6-9c72-48aa-b815-11a5c72a3288] Running
E1220 02:14:31.598744   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.007866703s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (87.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=docker: (1m27.937542675s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.94s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-d2xvk" [ebf1dd8e-ce6b-4cf7-a858-755e70edd320] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004755146s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-503505 "pgrep -a kubelet"
I1220 02:14:41.298667   13018 config.go:182] Loaded profile config "calico-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-krb9t" [e51df5ab-99a0-45f0-812f-4c21aad17503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-krb9t" [e51df5ab-99a0-45f0-812f-4c21aad17503] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004576499s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=docker: (1m4.799090027s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.80s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=docker
E1220 02:15:10.966396   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/no-preload-744061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:15:16.086669   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/no-preload-744061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1220 02:15:18.901404   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-562123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=docker: (1m30.88299724s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.88s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-503505 "pgrep -a kubelet"
I1220 02:15:26.013150   13018 config.go:182] Loaded profile config "false-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vxvhx" [f59a18c2-54ef-448d-bf86-8295475a55a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1220 02:15:26.327367   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/no-preload-744061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-vxvhx" [f59a18c2-54ef-448d-bf86-8295475a55a2] Running
E1220 02:15:33.039810   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/old-k8s-version-146675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.00433152s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (87.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-503505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --container-runtime=docker: (1m27.169975222s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-503505 "pgrep -a kubelet"
I1220 02:15:56.025660   13018 config.go:182] Loaded profile config "enable-default-cni-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-x7675" [8bf4d1a1-e082-4caa-bc9e-4311819304de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-x7675" [8bf4d1a1-e082-4caa-bc9e-4311819304de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004801137s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-45qlj" [327d5a94-ae8b-46c1-9da2-e8ef80ad1709] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006597574s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-503505 "pgrep -a kubelet"
I1220 02:16:03.838923   13018 config.go:182] Loaded profile config "flannel-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2gwkn" [85597347-f109-46a1-b7ef-3618e05cd123] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2gwkn" [85597347-f109-46a1-b7ef-3618e05cd123] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004921169s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestPreload/PreloadSrc/gcs (3.63s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-678507 --download-only --kubernetes-version v1.34.0-rc.1 --preload-src=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=docker
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-678507 --download-only --kubernetes-version v1.34.0-rc.1 --preload-src=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=docker: (3.48664979s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-678507" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-678507
--- PASS: TestPreload/PreloadSrc/gcs (3.63s)

                                                
                                    
TestPreload/PreloadSrc/github (5.67s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-605201 --download-only --kubernetes-version v1.34.0-rc.2 --preload-src=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=docker
E1220 02:16:27.768756   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/no-preload-744061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-605201 --download-only --kubernetes-version v1.34.0-rc.2 --preload-src=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=docker: (5.515686678s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-605201" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-605201
--- PASS: TestPreload/PreloadSrc/github (5.67s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.3s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-804462 --download-only --kubernetes-version v1.34.0-rc.2 --preload-src=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-804462" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-804462
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.30s)

                                                
                                    
TestISOImage/VersionJSON (0.16s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: c344550999bcbb78f38b2df057224788bb2d30b2
iso_test.go:118:   iso_version: v1.37.0-1765965980-22186
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
--- PASS: TestISOImage/VersionJSON (0.16s)

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-073858 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
E1220 02:16:34.991817   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/functional-281340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-503505 "pgrep -a kubelet"
I1220 02:16:41.306105   13018 config.go:182] Loaded profile config "bridge-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qkl8j" [8b17bc1b-805a-4bd8-8aed-021a0973022e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qkl8j" [8b17bc1b-805a-4bd8-8aed-021a0973022e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004424722s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-503505 "pgrep -a kubelet"
I1220 02:17:20.243603   13018 config.go:182] Loaded profile config "kubenet-503505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-503505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8gpjj" [61fdf4b8-9c17-4dd1-8a3c-690acee5c5e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1220 02:17:25.212242   13018 cert_rotation.go:172] "Loading client cert failed" err="open /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/default-k8s-diff-port-032958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-8gpjj" [61fdf4b8-9c17-4dd1-8a3c-690acee5c5e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004160728s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-503505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-503505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    

Test skip (46/456)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
260 TestGvisorAddon 0
289 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
317 TestKicCustomNetwork 0
318 TestKicExistingNetwork 0
319 TestKicCustomSubnet 0
320 TestKicStaticIP 0
352 TestChangeNoneUser 0
355 TestScheduledStopWindows 0
359 TestInsufficientStorage 0
363 TestMissingContainerUpgrade 0
372 TestStartStop/group/disable-driver-mounts 0.18
411 TestNetworkPlugins/group/cilium 4.13
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-106701" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-106701
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-503505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-503505" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 20 Dec 2025 02:06:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.15:8443
  name: pause-764015
- cluster:
    certificate-authority: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 20 Dec 2025 02:06:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.3:8443
  name: running-upgrade-413923
contexts:
- context:
    cluster: pause-764015
    extensions:
    - extension:
        last-update: Sat, 20 Dec 2025 02:06:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-764015
  name: pause-764015
- context:
    cluster: running-upgrade-413923
    user: running-upgrade-413923
  name: running-upgrade-413923
current-context: running-upgrade-413923
kind: Config
users:
- name: pause-764015
  user:
    client-certificate: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/pause-764015/client.crt
    client-key: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/pause-764015/client.key
- name: running-upgrade-413923
  user:
    client-certificate: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/running-upgrade-413923/client.crt
    client-key: /home/minitest/minikube-integration/7cd9f41b7421760cf1f1eaa8725bdb975037b06d-7160/.minikube/profiles/running-upgrade-413923/client.key
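
The "context was not found" and "Profile \"cilium-503505\" not found" messages throughout this debugLogs block are expected: the cilium test is skipped before "minikube start -p cilium-503505" ever runs, so the kubeconfig shown above defines only the pause-764015 and running-upgrade-413923 contexts. As a minimal sketch for checking this locally (standard kubectl and minikube commands, not captured from this run):

    # list the contexts the kubeconfig actually defines
    kubectl config get-contexts
    # list the minikube profiles that exist on the host
    minikube profile list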

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-503505

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-503505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-503505"

                                                
                                                
----------------------- debugLogs end: cilium-503505 [took: 3.952808317s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-503505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-503505
--- SKIP: TestNetworkPlugins/group/cilium (4.13s)

                                                
                                    